Lab 6: Convolutional Network Architectures - Brain Tumor MRI Images¶

  • Reece Iriye: 48255107
  • Eileen Garcia: 48241821
  • Trevor Dohm: 48376059

0: Imports¶

In [ ]:
# Import Statements

# MacOS Environment
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

# Data Manipulation
import numpy as np
import pandas as pd
import glob

# Data Visualization
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, roc_curve, auc
from sklearn.preprocessing import label_binarize
from sklearn.model_selection import StratifiedShuffleSplit

# Image Manipulation
from PIL import Image, ImageEnhance
import cv2

# Machine Learning
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Activation, Flatten, Reshape
from tensorflow.keras.layers import RandomFlip, RandomRotation, RandomTranslation, RandomContrast, RandomBrightness, average
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Sequential, load_model, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.metrics import Accuracy, Precision, Recall

# Warnings
from typing import List, Dict, Tuple
import warnings
warnings.filterwarnings('ignore')
In [ ]:
# Check Version, GPU Devices
print('Tensorflow Version:', tf.__version__)
tf.config.list_physical_devices('GPU')
Tensorflow Version: 2.14.1
Out[ ]:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:2', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:3', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:4', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:5', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:6', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:7', device_type='GPU')]
In [ ]:
# Set Specific (Unused) GPU
gpus = tf.config.list_physical_devices('GPU')
tf.config.set_visible_devices(gpus[0], 'GPU')

1: Business & Data Understanding¶

1.1: Dataset Overview¶

The "Brain Tumor MRI Dataset" on Kaggle provides a comprehensive collection of human brain MRI images aimed at supporting the accurate detection and classification of brain tumors. Consisting of 7,023 images from three distinct datasets - figshare, SARTAJ, and Br35H - this dataset separates MRI scans of brains into four categories: glioma, meningioma, no tumor, and pituitary. The dataset has seen several changes according to the description on Kaggle, with some glioma images from the SARTAJ dataset being replaced due to inaccuracies, highlighting ongoing modifications being made to the dataset for improved reliability and data quality. With images from the Br35H dataset constituting the no tumor class, users should be mindful that the images in this dataset come in varied sizes, meaning that pre-processing and re-sizing must be employed for us to achieve consistent analysis and improved model accuracy.

1.2: Purpose of Data Collection¶

Brain tumors as a whole pose severe risks given the confined space of the skull, whether or not they are malignant or benign. Growth of these tumors can potentially lead to brain damage and life-threatening situations. Timely detection and precise classification of these tumors are absolutely essential in guiding pre-emptive medical diagnosis before tumors signifcantly effect and harm a patient. With MRI's being a predominant imaging technique in this realm, there is a pressing need for advanced diagnostic models that can detect, classify by type, and pinpoint tumor locations effectively. This dataset, assembled from various sources and continuously refined, aims to provide a rich resource for researchers and data scientists to develop advanced machine learning models to aid in these critical diagnostic tasks.

This MRI dataset from Kaggle has been created with the specific intention to facilitate the development of models capable of detecting the presence of a brain tumor from MRI scans and classifying them by type. Medical practitioners and technicians can then use them as an advisory tool to make more precise diagnoses, leading to more targeted treatment options. Accurate labels are extremely important. It's crucial to acknowledge potential inaccuracies, such as those noted in the SARTAJ dataset, ensuring that machine learning models are trained on the most reliable data available.

1.3: Data Preprocessing Steps¶

1.3.1: Image Dimensions Standardization¶

The Kaggle brain MRI dataset is structured in distinct training and testing folders. We are going to merge all of these together into one dataset then perform our own data splitting in section 2.

All MRI images are likely varying in size, with some images being much larger than others and some images potentially being stretched in ways that could not be interpreted in a standard way by a Convolutional Neural Network or a transformer. We will standardize image size by stretching and/or squeezing images to fit into a $256 \times 256$ pixel dimension. This is a crucial preprocessing step. Neural networks, especially convolutional ones, require a consistent input size. Resizing all images to a uniform shape ensures that the network receives inputs in a consistent format, allowing us to batch process the data. Furthermore, downsizing larger images can also speed up the training process, as smaller images generally require less computational resources. On the other hand, recognizable features can be rendered from larger MRI screens with high dimensions could maybe lost from compression, but we are preferring interpretability in this scenario.

Because the images are MRI scans, we will read in the data and ensure that the images are grayscaled. MRI scans represent variations in tissue properties, not colors as in everyday images. Hence, we read in the data ensuring the images are in grayscale. Grayscaling, in this context, isn't about removing color; instead, it's about representing the MRI scans in a format that aligns with their natural presentation. By using grayscale, we utilize a single channel of floating point values, as opposed to the three channels (RGB) found in colored images. This not only aligns with the nature of MRI scans but also reduces the memory requirement to represent these images by one-third, making processing more efficient.

We will normalize the floating point representation of each grayscaled pixel value by dividing pixel values by 255. This scales all pixel values between 0 and 1. Normalization is an important step in image processing for neural networks, because networks tend to converge faster with normalized inputs. By keeping pixel values within a small range, it ensures that no particular set of weights gets updated disproportionately during the backpropagation process when fine-tuning or training a neural network.

In [ ]:
# Load Train / Test Data
train_data_mri: str = 'Dataset/Training/'
test_data_mri: str = 'Dataset/Testing/'
IMG_SIZE: int = 256
NUM_CLASSES: int = 4

# Label Encoding
labels: List[str] = ['glioma', 'meningioma', 'notumor', 'pituitary']
label_indexes: Dict[str, int] = {
    label: i 
    for i, label in enumerate(labels)
}

# Empty Lists For Data
train_paths: List[np.ndarray] = []  # List of numpy arrays
train_labels: List[int] = []  # List of integers (encoded labels)
test_paths: List[np.ndarray] = []
test_labels: List[int] = []

# Load Training Data
for label in labels:
    for img_file in glob.glob(train_data_mri + f'/{label}/*.jpg'):
        img: np.ndarray = cv2.imread(img_file, 0)  # This loads in grayscale. Remove ',0' for color.
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))  # Resize if necessary
        img = img.astype(np.float32) / 255.0  # Normalize pixel values
        train_paths.append(img)
        train_labels.append(label_indexes[label])

# Load Testing Data
for label in labels:
    for img_file in glob.glob(test_data_mri + f'/{label}/*.jpg'):
        img: np.ndarray = cv2.imread(img_file, 0)  # This loads in grayscale. Remove ',0' for color.
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))  # Resize if necessary
        img = img.astype(np.float32) / 255.0  # Normalize pixel values
        test_paths.append(img)
        test_labels.append(label_indexes[label])  

# Converting Lists To Numpy Arrays
train_paths = np.array(train_paths)
train_labels: np.ndarray = np.array(train_labels)
test_paths: np.ndarray = np.array(test_paths)
test_labels: np.ndarray = np.array(test_labels)

Ideally, having more images to train on would be preferrable, but 5,712 MRI scans spanning across multiple data sources is strong nonetheless. Diving deeper, let's check the total appearances for each class in the training data.

In [ ]:
# Find Indices For Each Label

train_indices_0: List[int] = np.where(np.array(train_labels) == 0)[0]
train_indices_1: List[int] = np.where(np.array(train_labels) == 1)[0]
train_indices_2: List[int] = np.where(np.array(train_labels) == 2)[0]
train_indices_3: List[int] = np.where(np.array(train_labels) == 3)[0]

test_indices_0: List[int] = np.where(np.array(test_labels) == 0)[0]
test_indices_1: List[int] = np.where(np.array(test_labels) == 1)[0]
test_indices_2: List[int] = np.where(np.array(test_labels) == 2)[0]
test_indices_3: List[int] = np.where(np.array(test_labels) == 3)[0]

# Total Class Distributions

print(f"Glioma Training Data MRI Scans: {len(train_indices_0)}.")
print(f"Meningioma Training Data MRI Scans: {len(train_indices_1)}.")
print(f"No Tumor Training Data MRI Scans: {len(train_indices_2)}.")
print(f"Pituitary Training Data MRI Scans: {len(train_indices_3)}.")
print("Total MRI Scans With Tumor: {}.".format(
    len(train_indices_0) + len(train_indices_1) + len(train_indices_3)
))
Glioma Training Data MRI Scans: 1321.
Meningioma Training Data MRI Scans: 1339.
No Tumor Training Data MRI Scans: 1595.
Pituitary Training Data MRI Scans: 1457.
Total MRI Scans With Tumor: 4117.

The MRI scans labeled as "no tumor" form a significant portion when compared to individual tumor classes. However, overall, the classes are relatively well-balanced. If our goal is to classify the specific type of tumor, this even distribution is advantageous, as it minimizes the risk of the model being overly biased towards a particular class.

On the other hand, if the objective shifts to merely detecting the presence or absence of a tumor, there's a potential pitfall. Out of the 5,712 images, 4,117 showcase some form of tumor. In such a scenario, a binary classification model might be more inclined to predict the presence of a tumor, given that it represents the majority of the dataset. This could lead to the model being overly optimistic about tumor existence. This is actually not terrible problem to have, as a model that overly predicts the existence of a tumor is much stronger than a model that does the opposite. The reason why is because the implications for a false positive for predicting the existence of a tumor is much less costly then a false negative where the model misses a tumor.

1.4: Visualizing Content in the Dataset¶

1.4.1: Displaying Some Images¶

In the plot below, we display some images from the training folder provided by Kaggle. The first 10 images with a label 0 represent a glioma tumor, the next 10 images with a label 1 represent a meningioma tumor, then the 10 after that with the label 2 represent a lack of existence of a tumor, and the last 10 images with the label 3 represent pituitary tumors.

In [ ]:
# Display some images from train_paths
plt.figure(figsize=(30, 30))

# Randomly select 10 indices from each set
train_random_indices_0: List[int] = np.random.choice(train_indices_0, 10, replace=False)
train_random_indices_1: List[int] = np.random.choice(train_indices_1, 10, replace=False)
train_random_indices_2: List[int] = np.random.choice(train_indices_2, 10, replace=False)
train_random_indices_3: List[int] = np.random.choice(train_indices_3, 10, replace=False)

train_plot_indices: List[int] = np.concatenate([
    train_random_indices_0, 
    train_random_indices_1, 
    train_random_indices_2, 
    train_random_indices_3,
])

# Ensure indices are unique and within bounds
train_plot_indices = list(set(train_plot_indices))
train_plot_indices.sort()

for index, (image, label) in enumerate(zip(train_paths[train_plot_indices], train_labels[train_plot_indices])):
    plt.subplot(8, 5, index + 1)
    plt.imshow(np.reshape(image, (IMG_SIZE, IMG_SIZE)), cmap=plt.cm.gray)
    plt.title(f'Training: {label}')
    
plt.show()
No description has been provided for this image

1.4.2: Listing the Overall Class Distribution for all 4 Classes¶

In [ ]:
# Insert the total amount of images that appear in the dataset
TOTAL_IMAGES: int = (
    len(train_indices_0) + len(test_indices_0) + 
    len(train_indices_1) + len(test_indices_1) + 
    len(train_indices_2) + len(test_indices_2) + 
    len(train_indices_3) + len(test_indices_3) 
)

# Total number of each individual class
print(f"Number of Glioma images in dataset: {len(train_indices_0) + len(test_indices_0)}")
print(f"Number of Meningioma images in dataset: {len(train_indices_1) + len(test_indices_1)}")
print(f"Number of No Tumor images in dataset: {len(train_indices_2) + len(test_indices_2)}")
print(f"Number of Pituitary images in dataset: {len(train_indices_3) + len(test_indices_3)}\n")

# Total proportion of each individual class
print(f"Proportion of Glioma images in dataset: {100*(len(train_indices_0) + len(test_indices_0)) / TOTAL_IMAGES:.3f}%")
print(f"Proportion of Meningioma images in dataset: {100*(len(train_indices_1) + len(test_indices_1)) / TOTAL_IMAGES:.3f}%")
print(f"Proportion of No Tumor images in dataset: {100*(len(train_indices_2) + len(test_indices_2)) / TOTAL_IMAGES:.3f}%")
print(f"Proportion of Pituitary images in dataset: {100*(len(train_indices_3) + len(test_indices_3)) / TOTAL_IMAGES:.3f}%\n")

# Print total images
print(f"Total Images in Dataset: {TOTAL_IMAGES}", end="")
Number of Glioma images in dataset: 1621
Number of Meningioma images in dataset: 1645
Number of No Tumor images in dataset: 2000
Number of Pituitary images in dataset: 1757

Proportion of Glioma images in dataset: 23.081%
Proportion of Meningioma images in dataset: 23.423%
Proportion of No Tumor images in dataset: 28.478%
Proportion of Pituitary images in dataset: 25.018%

Total Images in Dataset: 7023

The information above showcases the distribution of classes in our overall dataset that we will use before splitting it into a training and testing set. The distribution of the classes is pretty proportional, with a slightly higher representation of 'No Tumor' images. Understanding distribution is important before splitting the data into training and testing sets, as it tells us how to approach the splitting process in order to maintain a representative sample in both sets.

The dataset shows a relatively balanced distribution among the different tumor types, each around 23-25%. This balance benefits us when training the model since it needs to learn how to identify and distinguish between these tumor types effectively.

However, the 'No Tumor' category is slightly overrepresented when compared to other categories. It is not a severe imbalance, but it is still important to consider during data splitting and model training. The over-representation might reflect the real world scenario where many scans don't show tumors, which makes it a relevant aspect of the dataset.

The above tells us that we may want to use stratified splitting to ensure that each set mirrors the distribution as closely as possible, so that the model generalizes well across all classes and performs reliably on unseen data.

Additionally, given this distribution, metrics like precision, recall, and the F2 score (which emphasizes recall) may be important to understand how well the model performs with respect to each class considering the slight imbalance towards 'No Tumor' images.

The dataset's class distribution provides a solid foundation for building a reliable and effective diagnostic tool. The slight over-representation of 'No Tumor' images reflects a realistic aspect of medical imaging and should be factored into both the training process and the evaluation strategy.

2: Preparation for Modeling and Analysis¶

2.1: Metrics used to evaluate algorithm's performance¶

Our primary goal is to be accurate and reliable in identifying brain tumors. In this case, metrics that emphasize the model's ability to correctly identify positive cases (brains with tumors) are crucial.

Since we are performing multiclass classification in a medical context, the metrics Precision, Recall, F2 Score, and Area Under the ROC Curve (AUC-ROC) for each class can provide us with a detailed understanding of a model's performance across the various classes. We will also take a look at a confusion matrix for the model.

Supplementary Metrics: F2Accuracy, Precision, and Recall¶

Although not the main focus, accuracy, precision, and recall still offer valuable information. Accuracy gives a quick overview of overall performance, precision provides insight into the rate of false positives, and recall complements the F2 Score by highlighting the model's ability to detect actual positives.

Precision in multiclass settings is calculated separately for each class. It calculates the number of correctly-classified instances within all instances labled as that specific class. Precision is crucial when the cost of false positives is high. In this case, falsely identifying a no-tumor scan as a tumor (false positive) may lead to unnecessary stress and additional testing for the patient. However, this is not nearly as costly as false negatives when missing a case where a patient is told they do no not have a tumor or are misdiagnosed with a specific tumor when they do.

Recall in multiclass settings is calculated for each class independently. It measures a model's ability to correctly identify all instances of a particular class. Recall measures the number of correctly identified instances within the actual instances of a class. In medical scenarios such as this, it is important to seek high recall, as missing a true tumor case (false negative) has life-threatening consequences. This is extremely important.

These metrics provide a broader context to understand the model's performance, supporting the more focused insights gained from the F2 Score and Confusion Matrix. The importance of recall leads us to prefer F2 as a comparison metric of choice as well. F2 is an individual metric that does not ignore the fact that it's counterintuitive to provide people with misleading diagnoses, but emphasizes that it's way more costly to miss tumors or misdiagnose tumors.

A confusion matrix is also important, and we should primarily look at the region in the confusion matrix where no tumor is predicted but in reality, a tumor does exist. We can then dive deeper than an F2 score with this visual to see exactly what predictions are flawed.

We personally believe that F2 score and confusion matrices will be our preferred metrics for evaluation, but the ROC curve will also be useful for checking the relationship between precision and recall.

ROC curves can be extended for use in multiclass settings to measure the performance of a model in classifying each class against all others. The AUC-ROC is useful to understand how well a model distinguishes each tumor type from the others.

The multiclass AUC-ROC metric provides a clear, single metric that summarizes the discrimination ability of the model. In this case, this represents insight into how well the model can differentiate between the various types of brain tumors as well as the brains with no tumors. The F2 Score's emphasis on recall is crucial for prioritizing patient safety by minimizing missed tumor detections. The Confusion Matrix provides a clear view of error types, especially false negatives, which is vital for clinical applications. The ROC curve aids in understanding the model's discriminatory power. Meanwhile, accuracy, precision, and recall serve as supplementary metrics providing a broader overview of performance.

2.2: Using Stratified Shuffle Split to Divide our Data into Training and Testing Datasets¶

We will use Stratified Shuffle Split to divide our dataset into training and testing datasets.

This method maintains the original proportion of each class in both the training and testing datasets. This is crucial in medical imaging datasets where class distribution may not be perfectly balanced or where each class's representation is important for accurate diagnosis.

Additionally, using stratified shuffle split allows for repeated random sampling of the data to ensure comprehensive use of all available data.

Stratified Shuffle Split also mimics real-world applications. In practice, a diagnostic model would be expected to perform well on any random set of patients. By creating different random splits, this method simulates this scenario and provides a realistic evaluation of how the model might perform in a clinical setting.

Finally, using stratified shuffle split also addresses overfitting concerns. With limited data, the risk of overfitting is higher. Stratified Shuffle Split mitigates this by using different subsets of data in each iteration, testing the model's generalization capability.

Stratified Shuffle Split offers a balanced approach to maximize the available data, ensures that the class distribution is maintained in each split, and provides a realistic way to evaluate the model's performance.

In a medical context, models need to be robust, accurate, and generalizable to varied subsets of patient data. This method simulates the diversity of real-world medical scenarios and ensures that our model is not just statistically valid but also applicable in a clinical environment.

In [ ]:
# Concatenate Original Train / Test Split Into One Dataset
combined_paths = np.concatenate((train_paths, test_paths), axis = 0)
combined_labels = np.concatenate((train_labels, test_labels), axis = 0)

skf = StratifiedShuffleSplit(n_splits = 10, random_state = 42)
for train_index, test_index in skf.split(combined_paths, combined_labels):
    X_train, X_test = combined_paths[train_index], combined_paths[test_index]
    y_train, y_test = combined_labels[train_index], combined_labels[test_index]

# Train, Evaluate Model Using Train / Test (Check Shape)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
(6320, 256, 256) (6320,) (703, 256, 256) (703,)
In [ ]:
# Encode Labels For Training
y_train_encoded = to_categorical(y_train, num_classes = NUM_CLASSES)
y_test_encoded = to_categorical(y_test, num_classes = NUM_CLASSES)
In [ ]:
# Create Data Augmentation Generator
datagen = ImageDataGenerator(
    rotation_range = 10,
    zoom_range = 0.1,
    horizontal_flip = True,
    width_shift_range = 0.1,
    height_shift_range = 0.1
)
In [ ]:
# Visualization Parameters
batch_size = 1
augmented_images = []
augmented_labels = []
image_array = np.expand_dims(np.array([image]), axis = -1)
label_array = np.array([label])

# Generate Augmented Images
for _ in range(6):
    for X_batch, y_batch in datagen.flow(image_array, label_array, batch_size = batch_size):
        augmented_images.append(X_batch[0])
        augmented_labels.append(y_batch[0])
        break

# Create Plotting Grid
for i in range(6):
    plt.subplot(230 + 1 + i)
    plt.imshow(augmented_images[i].reshape(256, 256), cmap = plt.get_cmap('gray'))
    plt.title("Label: {}".format(augmented_labels[i]))

# Show Plot
plt.show()
No description has been provided for this image
In [ ]:
# Parameters
f = 4           # No. Filters
l = 4           # No. Layers
k = 3           # Kernel Size (k x k)
lam = 0.0001    # Kernel Regularization Constant (L2)

# Initialize Sequential Network
Conv1A = Sequential()

# Add Augmentations Directly
# Horizontal Flip, 10% Rotation, 10% Move, Brightness / Contrast Adjust 
Conv1A.add( RandomFlip("horizontal") )
Conv1A.add( RandomRotation(0.1) )
Conv1A.add( RandomTranslation(height_factor = 0.1, width_factor = 0.1) )
Conv1A.add( RandomBrightness(factor = 0.1, value_range = (0.0, 1.0)) )
Conv1A.add( RandomContrast(0.1) ) 

# Add Multiple Layers (Changeable)
for i in range(l):
  
    # Add Convolutional Layer, Follow With Pooling
    # Note: Loosely Following Unet Architecture
    Conv1A.add(Conv2D(filters = (f * 2 ** i),
                    input_shape = (IMG_SIZE, IMG_SIZE, 1),
                    kernel_size = (k, k), 
                    kernel_regularizer = l2(lam),
                    kernel_initializer = 'he_uniform',
                    padding = 'same', 
                    activation = 'relu',
                    data_format = 'channels_last'))
    Conv1A.add(MaxPooling2D(pool_size = (2, 2), data_format = 'channels_last'))

# Flatten After Convolutional Layers
Conv1A.add(Flatten())

# Dropout Regularization Unnecessary
Conv1A.add(Dense(NUM_CLASSES, activation = 'softmax', 
                kernel_initializer = 'glorot_uniform',
                kernel_regularizer = l2(lam)
                ))
In [ ]:
# Define custom F2 score as a Keras metric
class F2Score(tf.keras.metrics.Metric):
    def __init__(self, name='f2_score', **kwargs):
        super(F2Score, self).__init__(name=name, **kwargs)
        self.precision = Precision()
        self.recall = Recall()
        self.beta_squared = 2**2

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.round(y_pred)
        self.precision.update_state(y_true, y_pred, sample_weight)
        self.recall.update_state(y_true, y_pred, sample_weight)

    def result(self):
        precision = self.precision.result()
        recall = self.recall.result()
        return ((1 + self.beta_squared) * precision * recall / 
                (self.beta_squared * precision + recall + tf.keras.backend.epsilon()))

    def reset_states(self):
        self.precision.reset_states()
        self.recall.reset_states()
In [ ]:
# Function to plot ROC curve and calculate AUC
def plot_roc_curve(y_test, y_pred, n_classes, title):
    # Binarize the output labels for each class
    y_test_binarized = label_binarize(y_test, classes=np.arange(n_classes))

    # Compute ROC curve and ROC area for each class
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y_test_binarized[:, i], y_pred[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])

    # Compute micro-average ROC curve and ROC area
    fpr["micro"], tpr["micro"], _ = roc_curve(y_test_binarized.ravel(), y_pred.ravel())
    roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

    plt.figure()
    lw = 2
    plt.plot(fpr["micro"], tpr["micro"],
             label='micro-average ROC curve (area = {0:0.2f})'
                   ''.format(roc_auc["micro"]),
             color='deeppink', linestyle=':', linewidth=4)

    for i in range(n_classes):
        plt.plot(fpr[i], tpr[i], lw=lw,
                 label='ROC curve of class {0} (area = {1:0.2f})'
                 ''.format(i, roc_auc[i]))

    plt.plot([0, 1], [0, 1], 'k--', lw=lw)
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title(f'ROC and AUC for {title}')
    plt.legend(loc="lower right")
    plt.show()
In [ ]:
# Train With CC, Adam
Conv1A.compile(loss = 'categorical_crossentropy',
               optimizer = 'adam',
               metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Build Model With Basic Parameters (Build For Grayscale Images)
Conv1A.build((None, IMG_SIZE, IMG_SIZE, 1))
Conv1A.summary()

# Fit Model (High Patience For Full Convergence)
H1A = Conv1A.fit(X_train, y_train_encoded, 
          batch_size = 32,
          epochs = 100, 
          verbose = 1,
          validation_data = (X_test, y_test_encoded))
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip_1 (RandomFlip)  (None, 256, 256, 1)       0         
                                                                 
 random_rotation_1 (RandomR  (None, 256, 256, 1)       0         
 otation)                                                        
                                                                 
 random_translation_1 (Rand  (None, 256, 256, 1)       0         
 omTranslation)                                                  
                                                                 
 random_brightness_1 (Rando  (None, 256, 256, 1)       0         
 mBrightness)                                                    
                                                                 
 random_contrast_1 (RandomC  (None, 256, 256, 1)       0         
 ontrast)                                                        
                                                                 
 conv2d_4 (Conv2D)           (None, 256, 256, 4)       40        
                                                                 
 max_pooling2d_4 (MaxPoolin  (None, 128, 128, 4)       0         
 g2D)                                                            
                                                                 
 conv2d_5 (Conv2D)           (None, 128, 128, 8)       296       
                                                                 
 max_pooling2d_5 (MaxPoolin  (None, 64, 64, 8)         0         
 g2D)                                                            
                                                                 
 conv2d_6 (Conv2D)           (None, 64, 64, 16)        1168      
                                                                 
 max_pooling2d_6 (MaxPoolin  (None, 32, 32, 16)        0         
 g2D)                                                            
                                                                 
 conv2d_7 (Conv2D)           (None, 32, 32, 32)        4640      
                                                                 
 max_pooling2d_7 (MaxPoolin  (None, 16, 16, 32)        0         
 g2D)                                                            
                                                                 
 flatten_1 (Flatten)         (None, 8192)              0         
                                                                 
 dense_1 (Dense)             (None, 4)                 32772     
                                                                 
=================================================================
Total params: 38916 (152.02 KB)
Trainable params: 38916 (152.02 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
198/198 [==============================] - 4s 10ms/step - loss: 0.2143 - accuracy: 0.9438 - precision_4: 0.9470 - recall_4: 0.9394 - f2_score: 0.9409 - val_loss: 0.2552 - val_accuracy: 0.9175 - val_precision_4: 0.9211 - val_recall_4: 0.9132 - val_f2_score: 0.9148
Epoch 2/100
198/198 [==============================] - 2s 11ms/step - loss: 0.2048 - accuracy: 0.9422 - precision_4: 0.9463 - recall_4: 0.9375 - f2_score: 0.9393 - val_loss: 0.2285 - val_accuracy: 0.9260 - val_precision_4: 0.9298 - val_recall_4: 0.9232 - val_f2_score: 0.9245
Epoch 3/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2022 - accuracy: 0.9489 - precision_4: 0.9532 - recall_4: 0.9451 - f2_score: 0.9467 - val_loss: 0.2211 - val_accuracy: 0.9317 - val_precision_4: 0.9371 - val_recall_4: 0.9317 - val_f2_score: 0.9328
Epoch 4/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2132 - accuracy: 0.9438 - precision_4: 0.9484 - recall_4: 0.9397 - f2_score: 0.9414 - val_loss: 0.2638 - val_accuracy: 0.9232 - val_precision_4: 0.9266 - val_recall_4: 0.9161 - val_f2_score: 0.9182
Epoch 5/100
198/198 [==============================] - 3s 13ms/step - loss: 0.2037 - accuracy: 0.9472 - precision_4: 0.9502 - recall_4: 0.9413 - f2_score: 0.9431 - val_loss: 0.2462 - val_accuracy: 0.9189 - val_precision_4: 0.9233 - val_recall_4: 0.9075 - val_f2_score: 0.9106
Epoch 6/100
198/198 [==============================] - 2s 8ms/step - loss: 0.1993 - accuracy: 0.9459 - precision_4: 0.9490 - recall_4: 0.9416 - f2_score: 0.9431 - val_loss: 0.1803 - val_accuracy: 0.9417 - val_precision_4: 0.9457 - val_recall_4: 0.9417 - val_f2_score: 0.9425
Epoch 7/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2080 - accuracy: 0.9451 - precision_4: 0.9509 - recall_4: 0.9400 - f2_score: 0.9422 - val_loss: 0.2186 - val_accuracy: 0.9445 - val_precision_4: 0.9509 - val_recall_4: 0.9360 - val_f2_score: 0.9389
Epoch 8/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2155 - accuracy: 0.9429 - precision_4: 0.9473 - recall_4: 0.9386 - f2_score: 0.9403 - val_loss: 0.2174 - val_accuracy: 0.9403 - val_precision_4: 0.9469 - val_recall_4: 0.9388 - val_f2_score: 0.9404
Epoch 9/100
198/198 [==============================] - 2s 11ms/step - loss: 0.2020 - accuracy: 0.9462 - precision_4: 0.9505 - recall_4: 0.9419 - f2_score: 0.9436 - val_loss: 0.2315 - val_accuracy: 0.9403 - val_precision_4: 0.9442 - val_recall_4: 0.9388 - val_f2_score: 0.9399
Epoch 10/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2013 - accuracy: 0.9440 - precision_4: 0.9491 - recall_4: 0.9384 - f2_score: 0.9406 - val_loss: 0.2023 - val_accuracy: 0.9488 - val_precision_4: 0.9553 - val_recall_4: 0.9431 - val_f2_score: 0.9455
Epoch 11/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1990 - accuracy: 0.9476 - precision_4: 0.9538 - recall_4: 0.9437 - f2_score: 0.9457 - val_loss: 0.2054 - val_accuracy: 0.9346 - val_precision_4: 0.9386 - val_recall_4: 0.9346 - val_f2_score: 0.9354
Epoch 12/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1962 - accuracy: 0.9494 - precision_4: 0.9523 - recall_4: 0.9443 - f2_score: 0.9459 - val_loss: 0.2226 - val_accuracy: 0.9360 - val_precision_4: 0.9413 - val_recall_4: 0.9346 - val_f2_score: 0.9359
Epoch 13/100
198/198 [==============================] - 2s 10ms/step - loss: 0.2045 - accuracy: 0.9459 - precision_4: 0.9506 - recall_4: 0.9410 - f2_score: 0.9429 - val_loss: 0.1899 - val_accuracy: 0.9445 - val_precision_4: 0.9471 - val_recall_4: 0.9431 - val_f2_score: 0.9439
Epoch 14/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1954 - accuracy: 0.9500 - precision_4: 0.9534 - recall_4: 0.9448 - f2_score: 0.9465 - val_loss: 0.3136 - val_accuracy: 0.8990 - val_precision_4: 0.9039 - val_recall_4: 0.8962 - val_f2_score: 0.8977
Epoch 15/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2020 - accuracy: 0.9457 - precision_4: 0.9502 - recall_4: 0.9426 - f2_score: 0.9441 - val_loss: 0.1859 - val_accuracy: 0.9374 - val_precision_4: 0.9425 - val_recall_4: 0.9331 - val_f2_score: 0.9350
Epoch 16/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1985 - accuracy: 0.9481 - precision_4: 0.9531 - recall_4: 0.9451 - f2_score: 0.9467 - val_loss: 0.2127 - val_accuracy: 0.9403 - val_precision_4: 0.9412 - val_recall_4: 0.9331 - val_f2_score: 0.9347
Epoch 17/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1941 - accuracy: 0.9495 - precision_4: 0.9542 - recall_4: 0.9465 - f2_score: 0.9480 - val_loss: 0.2859 - val_accuracy: 0.9061 - val_precision_4: 0.9137 - val_recall_4: 0.9033 - val_f2_score: 0.9053
Epoch 18/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1980 - accuracy: 0.9498 - precision_4: 0.9552 - recall_4: 0.9470 - f2_score: 0.9486 - val_loss: 0.2171 - val_accuracy: 0.9317 - val_precision_4: 0.9364 - val_recall_4: 0.9218 - val_f2_score: 0.9247
Epoch 19/100
198/198 [==============================] - 3s 14ms/step - loss: 0.1959 - accuracy: 0.9506 - precision_4: 0.9538 - recall_4: 0.9441 - f2_score: 0.9461 - val_loss: 0.2422 - val_accuracy: 0.9275 - val_precision_4: 0.9300 - val_recall_4: 0.9260 - val_f2_score: 0.9268
Epoch 20/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2075 - accuracy: 0.9438 - precision_4: 0.9486 - recall_4: 0.9400 - f2_score: 0.9417 - val_loss: 0.2347 - val_accuracy: 0.9360 - val_precision_4: 0.9397 - val_recall_4: 0.9303 - val_f2_score: 0.9322
Epoch 21/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1993 - accuracy: 0.9495 - precision_4: 0.9535 - recall_4: 0.9449 - f2_score: 0.9466 - val_loss: 0.2020 - val_accuracy: 0.9502 - val_precision_4: 0.9513 - val_recall_4: 0.9445 - val_f2_score: 0.9459
Epoch 22/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1913 - accuracy: 0.9509 - precision_4: 0.9544 - recall_4: 0.9475 - f2_score: 0.9488 - val_loss: 0.2181 - val_accuracy: 0.9388 - val_precision_4: 0.9483 - val_recall_4: 0.9388 - val_f2_score: 0.9407
Epoch 23/100
198/198 [==============================] - 2s 11ms/step - loss: 0.1980 - accuracy: 0.9494 - precision_4: 0.9527 - recall_4: 0.9468 - f2_score: 0.9480 - val_loss: 0.1882 - val_accuracy: 0.9488 - val_precision_4: 0.9514 - val_recall_4: 0.9459 - val_f2_score: 0.9470
Epoch 24/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1896 - accuracy: 0.9517 - precision_4: 0.9541 - recall_4: 0.9475 - f2_score: 0.9488 - val_loss: 0.2470 - val_accuracy: 0.9275 - val_precision_4: 0.9311 - val_recall_4: 0.9232 - val_f2_score: 0.9248
Epoch 25/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1972 - accuracy: 0.9473 - precision_4: 0.9527 - recall_4: 0.9440 - f2_score: 0.9457 - val_loss: 0.2725 - val_accuracy: 0.9203 - val_precision_4: 0.9291 - val_recall_4: 0.9132 - val_f2_score: 0.9164
Epoch 26/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1994 - accuracy: 0.9473 - precision_4: 0.9507 - recall_4: 0.9432 - f2_score: 0.9447 - val_loss: 0.2114 - val_accuracy: 0.9374 - val_precision_4: 0.9440 - val_recall_4: 0.9346 - val_f2_score: 0.9364
Epoch 27/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1935 - accuracy: 0.9508 - precision_4: 0.9532 - recall_4: 0.9470 - f2_score: 0.9482 - val_loss: 0.1928 - val_accuracy: 0.9516 - val_precision_4: 0.9516 - val_recall_4: 0.9516 - val_f2_score: 0.9516
Epoch 28/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1922 - accuracy: 0.9508 - precision_4: 0.9550 - recall_4: 0.9479 - f2_score: 0.9494 - val_loss: 0.2279 - val_accuracy: 0.9488 - val_precision_4: 0.9568 - val_recall_4: 0.9459 - val_f2_score: 0.9481
Epoch 29/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1996 - accuracy: 0.9475 - precision_4: 0.9508 - recall_4: 0.9427 - f2_score: 0.9443 - val_loss: 0.2468 - val_accuracy: 0.9360 - val_precision_4: 0.9411 - val_recall_4: 0.9317 - val_f2_score: 0.9336
Epoch 30/100
198/198 [==============================] - 2s 10ms/step - loss: 0.2009 - accuracy: 0.9483 - precision_4: 0.9519 - recall_4: 0.9434 - f2_score: 0.9451 - val_loss: 0.2243 - val_accuracy: 0.9388 - val_precision_4: 0.9426 - val_recall_4: 0.9346 - val_f2_score: 0.9362
Epoch 31/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1936 - accuracy: 0.9505 - precision_4: 0.9548 - recall_4: 0.9454 - f2_score: 0.9473 - val_loss: 0.1903 - val_accuracy: 0.9531 - val_precision_4: 0.9555 - val_recall_4: 0.9474 - val_f2_score: 0.9490
Epoch 32/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1913 - accuracy: 0.9527 - precision_4: 0.9554 - recall_4: 0.9502 - f2_score: 0.9512 - val_loss: 0.1967 - val_accuracy: 0.9531 - val_precision_4: 0.9529 - val_recall_4: 0.9488 - val_f2_score: 0.9496
Epoch 33/100
198/198 [==============================] - 3s 13ms/step - loss: 0.1908 - accuracy: 0.9494 - precision_4: 0.9536 - recall_4: 0.9459 - f2_score: 0.9474 - val_loss: 0.2365 - val_accuracy: 0.9431 - val_precision_4: 0.9484 - val_recall_4: 0.9403 - val_f2_score: 0.9419
Epoch 34/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1871 - accuracy: 0.9487 - precision_4: 0.9532 - recall_4: 0.9440 - f2_score: 0.9458 - val_loss: 0.1994 - val_accuracy: 0.9488 - val_precision_4: 0.9488 - val_recall_4: 0.9488 - val_f2_score: 0.9488
Epoch 35/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1936 - accuracy: 0.9522 - precision_4: 0.9560 - recall_4: 0.9486 - f2_score: 0.9500 - val_loss: 0.1574 - val_accuracy: 0.9545 - val_precision_4: 0.9543 - val_recall_4: 0.9502 - val_f2_score: 0.9510
Epoch 36/100
198/198 [==============================] - 2s 9ms/step - loss: 0.2055 - accuracy: 0.9448 - precision_4: 0.9494 - recall_4: 0.9410 - f2_score: 0.9427 - val_loss: 0.3975 - val_accuracy: 0.8876 - val_precision_4: 0.8960 - val_recall_4: 0.8819 - val_f2_score: 0.8847
Epoch 37/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1983 - accuracy: 0.9491 - precision_4: 0.9537 - recall_4: 0.9451 - f2_score: 0.9468 - val_loss: 0.2133 - val_accuracy: 0.9403 - val_precision_4: 0.9455 - val_recall_4: 0.9374 - val_f2_score: 0.9390
Epoch 38/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1872 - accuracy: 0.9530 - precision_4: 0.9562 - recall_4: 0.9502 - f2_score: 0.9514 - val_loss: 0.2107 - val_accuracy: 0.9360 - val_precision_4: 0.9413 - val_recall_4: 0.9346 - val_f2_score: 0.9359
Epoch 39/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1925 - accuracy: 0.9505 - precision_4: 0.9548 - recall_4: 0.9483 - f2_score: 0.9496 - val_loss: 0.2253 - val_accuracy: 0.9346 - val_precision_4: 0.9397 - val_recall_4: 0.9317 - val_f2_score: 0.9333
Epoch 40/100
198/198 [==============================] - 3s 13ms/step - loss: 0.1983 - accuracy: 0.9441 - precision_4: 0.9482 - recall_4: 0.9416 - f2_score: 0.9429 - val_loss: 0.1776 - val_accuracy: 0.9531 - val_precision_4: 0.9543 - val_recall_4: 0.9502 - val_f2_score: 0.9510
Epoch 41/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1954 - accuracy: 0.9506 - precision_4: 0.9533 - recall_4: 0.9465 - f2_score: 0.9479 - val_loss: 0.2007 - val_accuracy: 0.9488 - val_precision_4: 0.9553 - val_recall_4: 0.9417 - val_f2_score: 0.9444
Epoch 42/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1851 - accuracy: 0.9522 - precision_4: 0.9566 - recall_4: 0.9484 - f2_score: 0.9500 - val_loss: 0.2724 - val_accuracy: 0.9346 - val_precision_4: 0.9397 - val_recall_4: 0.9303 - val_f2_score: 0.9322
Epoch 43/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1856 - accuracy: 0.9511 - precision_4: 0.9555 - recall_4: 0.9479 - f2_score: 0.9494 - val_loss: 0.2530 - val_accuracy: 0.9289 - val_precision_4: 0.9342 - val_recall_4: 0.9289 - val_f2_score: 0.9299
Epoch 44/100
198/198 [==============================] - 2s 11ms/step - loss: 0.1980 - accuracy: 0.9495 - precision_4: 0.9532 - recall_4: 0.9453 - f2_score: 0.9468 - val_loss: 0.2953 - val_accuracy: 0.9189 - val_precision_4: 0.9266 - val_recall_4: 0.9161 - val_f2_score: 0.9182
Epoch 45/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1955 - accuracy: 0.9489 - precision_4: 0.9528 - recall_4: 0.9460 - f2_score: 0.9474 - val_loss: 0.2396 - val_accuracy: 0.9360 - val_precision_4: 0.9358 - val_recall_4: 0.9331 - val_f2_score: 0.9337
Epoch 46/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1888 - accuracy: 0.9503 - precision_4: 0.9543 - recall_4: 0.9481 - f2_score: 0.9493 - val_loss: 0.2269 - val_accuracy: 0.9417 - val_precision_4: 0.9470 - val_recall_4: 0.9403 - val_f2_score: 0.9416
Epoch 47/100
198/198 [==============================] - 3s 13ms/step - loss: 0.1873 - accuracy: 0.9503 - precision_4: 0.9547 - recall_4: 0.9473 - f2_score: 0.9488 - val_loss: 0.2401 - val_accuracy: 0.9346 - val_precision_4: 0.9385 - val_recall_4: 0.9331 - val_f2_score: 0.9342
Epoch 48/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1939 - accuracy: 0.9475 - precision_4: 0.9518 - recall_4: 0.9440 - f2_score: 0.9455 - val_loss: 0.1936 - val_accuracy: 0.9488 - val_precision_4: 0.9513 - val_recall_4: 0.9445 - val_f2_score: 0.9459
Epoch 49/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1803 - accuracy: 0.9581 - precision_4: 0.9619 - recall_4: 0.9551 - f2_score: 0.9564 - val_loss: 0.1999 - val_accuracy: 0.9474 - val_precision_4: 0.9500 - val_recall_4: 0.9459 - val_f2_score: 0.9468
Epoch 50/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1957 - accuracy: 0.9468 - precision_4: 0.9506 - recall_4: 0.9434 - f2_score: 0.9448 - val_loss: 0.1738 - val_accuracy: 0.9587 - val_precision_4: 0.9614 - val_recall_4: 0.9559 - val_f2_score: 0.9570
Epoch 51/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1947 - accuracy: 0.9462 - precision_4: 0.9503 - recall_4: 0.9411 - f2_score: 0.9430 - val_loss: 0.2235 - val_accuracy: 0.9488 - val_precision_4: 0.9513 - val_recall_4: 0.9445 - val_f2_score: 0.9459
Epoch 52/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1883 - accuracy: 0.9489 - precision_4: 0.9533 - recall_4: 0.9459 - f2_score: 0.9474 - val_loss: 0.2134 - val_accuracy: 0.9445 - val_precision_4: 0.9484 - val_recall_4: 0.9417 - val_f2_score: 0.9430
Epoch 53/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1908 - accuracy: 0.9505 - precision_4: 0.9543 - recall_4: 0.9459 - f2_score: 0.9476 - val_loss: 0.2846 - val_accuracy: 0.9218 - val_precision_4: 0.9281 - val_recall_4: 0.9175 - val_f2_score: 0.9196
Epoch 54/100
198/198 [==============================] - 3s 14ms/step - loss: 0.1833 - accuracy: 0.9538 - precision_4: 0.9565 - recall_4: 0.9505 - f2_score: 0.9517 - val_loss: 0.2215 - val_accuracy: 0.9388 - val_precision_4: 0.9440 - val_recall_4: 0.9360 - val_f2_score: 0.9376
Epoch 55/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1809 - accuracy: 0.9566 - precision_4: 0.9591 - recall_4: 0.9532 - f2_score: 0.9543 - val_loss: 0.2071 - val_accuracy: 0.9403 - val_precision_4: 0.9454 - val_recall_4: 0.9360 - val_f2_score: 0.9379
Epoch 56/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1948 - accuracy: 0.9502 - precision_4: 0.9544 - recall_4: 0.9465 - f2_score: 0.9481 - val_loss: 0.3056 - val_accuracy: 0.9132 - val_precision_4: 0.9155 - val_recall_4: 0.9090 - val_f2_score: 0.9103
Epoch 57/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1814 - accuracy: 0.9541 - precision_4: 0.9588 - recall_4: 0.9503 - f2_score: 0.9520 - val_loss: 0.2421 - val_accuracy: 0.9360 - val_precision_4: 0.9400 - val_recall_4: 0.9360 - val_f2_score: 0.9368
Epoch 58/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1876 - accuracy: 0.9536 - precision_4: 0.9585 - recall_4: 0.9506 - f2_score: 0.9522 - val_loss: 0.2002 - val_accuracy: 0.9516 - val_precision_4: 0.9556 - val_recall_4: 0.9488 - val_f2_score: 0.9501
Epoch 59/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1798 - accuracy: 0.9585 - precision_4: 0.9611 - recall_4: 0.9549 - f2_score: 0.9561 - val_loss: 0.2309 - val_accuracy: 0.9417 - val_precision_4: 0.9456 - val_recall_4: 0.9403 - val_f2_score: 0.9413
Epoch 60/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1974 - accuracy: 0.9497 - precision_4: 0.9541 - recall_4: 0.9476 - f2_score: 0.9489 - val_loss: 0.1865 - val_accuracy: 0.9502 - val_precision_4: 0.9540 - val_recall_4: 0.9445 - val_f2_score: 0.9464
Epoch 61/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1916 - accuracy: 0.9528 - precision_4: 0.9561 - recall_4: 0.9486 - f2_score: 0.9501 - val_loss: 0.2241 - val_accuracy: 0.9360 - val_precision_4: 0.9385 - val_recall_4: 0.9331 - val_f2_score: 0.9342
Epoch 62/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1815 - accuracy: 0.9547 - precision_4: 0.9581 - recall_4: 0.9505 - f2_score: 0.9520 - val_loss: 0.2034 - val_accuracy: 0.9417 - val_precision_4: 0.9443 - val_recall_4: 0.9403 - val_f2_score: 0.9411
Epoch 63/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1907 - accuracy: 0.9533 - precision_4: 0.9573 - recall_4: 0.9511 - f2_score: 0.9523 - val_loss: 0.2056 - val_accuracy: 0.9474 - val_precision_4: 0.9526 - val_recall_4: 0.9431 - val_f2_score: 0.9450
Epoch 64/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1767 - accuracy: 0.9552 - precision_4: 0.9584 - recall_4: 0.9525 - f2_score: 0.9537 - val_loss: 0.1928 - val_accuracy: 0.9545 - val_precision_4: 0.9544 - val_recall_4: 0.9531 - val_f2_score: 0.9533
Epoch 65/100
198/198 [==============================] - 2s 11ms/step - loss: 0.1822 - accuracy: 0.9541 - precision_4: 0.9565 - recall_4: 0.9503 - f2_score: 0.9516 - val_loss: 0.2282 - val_accuracy: 0.9374 - val_precision_4: 0.9413 - val_recall_4: 0.9360 - val_f2_score: 0.9371
Epoch 66/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1920 - accuracy: 0.9513 - precision_4: 0.9546 - recall_4: 0.9475 - f2_score: 0.9489 - val_loss: 0.1753 - val_accuracy: 0.9559 - val_precision_4: 0.9613 - val_recall_4: 0.9545 - val_f2_score: 0.9558
Epoch 67/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1815 - accuracy: 0.9560 - precision_4: 0.9586 - recall_4: 0.9528 - f2_score: 0.9540 - val_loss: 0.1795 - val_accuracy: 0.9559 - val_precision_4: 0.9598 - val_recall_4: 0.9516 - val_f2_score: 0.9533
Epoch 68/100
198/198 [==============================] - 3s 13ms/step - loss: 0.1895 - accuracy: 0.9535 - precision_4: 0.9568 - recall_4: 0.9498 - f2_score: 0.9512 - val_loss: 0.1833 - val_accuracy: 0.9474 - val_precision_4: 0.9499 - val_recall_4: 0.9445 - val_f2_score: 0.9456
Epoch 69/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1884 - accuracy: 0.9525 - precision_4: 0.9559 - recall_4: 0.9498 - f2_score: 0.9510 - val_loss: 0.1746 - val_accuracy: 0.9545 - val_precision_4: 0.9585 - val_recall_4: 0.9531 - val_f2_score: 0.9541
Epoch 70/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1823 - accuracy: 0.9551 - precision_4: 0.9578 - recall_4: 0.9514 - f2_score: 0.9527 - val_loss: 0.1697 - val_accuracy: 0.9587 - val_precision_4: 0.9629 - val_recall_4: 0.9587 - val_f2_score: 0.9596
Epoch 71/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1829 - accuracy: 0.9551 - precision_4: 0.9607 - recall_4: 0.9527 - f2_score: 0.9543 - val_loss: 0.2152 - val_accuracy: 0.9360 - val_precision_4: 0.9426 - val_recall_4: 0.9346 - val_f2_score: 0.9362
Epoch 72/100
198/198 [==============================] - 2s 11ms/step - loss: 0.1810 - accuracy: 0.9559 - precision_4: 0.9598 - recall_4: 0.9519 - f2_score: 0.9535 - val_loss: 0.1719 - val_accuracy: 0.9602 - val_precision_4: 0.9641 - val_recall_4: 0.9559 - val_f2_score: 0.9575
Epoch 73/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1836 - accuracy: 0.9557 - precision_4: 0.9592 - recall_4: 0.9522 - f2_score: 0.9536 - val_loss: 0.1475 - val_accuracy: 0.9630 - val_precision_4: 0.9656 - val_recall_4: 0.9573 - val_f2_score: 0.9590
Epoch 74/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1913 - accuracy: 0.9511 - precision_4: 0.9553 - recall_4: 0.9478 - f2_score: 0.9493 - val_loss: 0.2351 - val_accuracy: 0.9417 - val_precision_4: 0.9481 - val_recall_4: 0.9346 - val_f2_score: 0.9372
Epoch 75/100
198/198 [==============================] - 3s 13ms/step - loss: 0.1785 - accuracy: 0.9538 - precision_4: 0.9571 - recall_4: 0.9494 - f2_score: 0.9509 - val_loss: 0.1899 - val_accuracy: 0.9516 - val_precision_4: 0.9555 - val_recall_4: 0.9459 - val_f2_score: 0.9478
Epoch 76/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1842 - accuracy: 0.9535 - precision_4: 0.9579 - recall_4: 0.9497 - f2_score: 0.9513 - val_loss: 0.2118 - val_accuracy: 0.9445 - val_precision_4: 0.9499 - val_recall_4: 0.9445 - val_f2_score: 0.9456
Epoch 77/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1828 - accuracy: 0.9544 - precision_4: 0.9575 - recall_4: 0.9508 - f2_score: 0.9521 - val_loss: 0.2326 - val_accuracy: 0.9346 - val_precision_4: 0.9397 - val_recall_4: 0.9317 - val_f2_score: 0.9333
Epoch 78/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1738 - accuracy: 0.9566 - precision_4: 0.9590 - recall_4: 0.9544 - f2_score: 0.9553 - val_loss: 0.2525 - val_accuracy: 0.9331 - val_precision_4: 0.9358 - val_recall_4: 0.9331 - val_f2_score: 0.9337
Epoch 79/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1817 - accuracy: 0.9544 - precision_4: 0.9586 - recall_4: 0.9519 - f2_score: 0.9532 - val_loss: 0.2414 - val_accuracy: 0.9388 - val_precision_4: 0.9399 - val_recall_4: 0.9346 - val_f2_score: 0.9356
Epoch 80/100
198/198 [==============================] - 2s 8ms/step - loss: 0.1767 - accuracy: 0.9570 - precision_4: 0.9603 - recall_4: 0.9538 - f2_score: 0.9551 - val_loss: 0.2007 - val_accuracy: 0.9516 - val_precision_4: 0.9570 - val_recall_4: 0.9502 - val_f2_score: 0.9516
Epoch 81/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1851 - accuracy: 0.9535 - precision_4: 0.9565 - recall_4: 0.9509 - f2_score: 0.9521 - val_loss: 0.2113 - val_accuracy: 0.9488 - val_precision_4: 0.9513 - val_recall_4: 0.9445 - val_f2_score: 0.9459
Epoch 82/100
198/198 [==============================] - 3s 14ms/step - loss: 0.1783 - accuracy: 0.9560 - precision_4: 0.9604 - recall_4: 0.9528 - f2_score: 0.9544 - val_loss: 0.2472 - val_accuracy: 0.9303 - val_precision_4: 0.9340 - val_recall_4: 0.9260 - val_f2_score: 0.9276
Epoch 83/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1714 - accuracy: 0.9581 - precision_4: 0.9612 - recall_4: 0.9555 - f2_score: 0.9567 - val_loss: 0.1627 - val_accuracy: 0.9616 - val_precision_4: 0.9614 - val_recall_4: 0.9573 - val_f2_score: 0.9581
Epoch 84/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1694 - accuracy: 0.9579 - precision_4: 0.9599 - recall_4: 0.9544 - f2_score: 0.9555 - val_loss: 0.1648 - val_accuracy: 0.9630 - val_precision_4: 0.9670 - val_recall_4: 0.9602 - val_f2_score: 0.9615
Epoch 85/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1842 - accuracy: 0.9544 - precision_4: 0.9580 - recall_4: 0.9522 - f2_score: 0.9534 - val_loss: 0.2086 - val_accuracy: 0.9317 - val_precision_4: 0.9371 - val_recall_4: 0.9317 - val_f2_score: 0.9328
Epoch 86/100
198/198 [==============================] - 2s 10ms/step - loss: 0.1873 - accuracy: 0.9519 - precision_4: 0.9566 - recall_4: 0.9479 - f2_score: 0.9497 - val_loss: 0.1695 - val_accuracy: 0.9573 - val_precision_4: 0.9573 - val_recall_4: 0.9559 - val_f2_score: 0.9562
Epoch 87/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1810 - accuracy: 0.9573 - precision_4: 0.9606 - recall_4: 0.9535 - f2_score: 0.9549 - val_loss: 0.2569 - val_accuracy: 0.9289 - val_precision_4: 0.9315 - val_recall_4: 0.9289 - val_f2_score: 0.9294
Epoch 88/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1839 - accuracy: 0.9563 - precision_4: 0.9593 - recall_4: 0.9516 - f2_score: 0.9531 - val_loss: 0.1642 - val_accuracy: 0.9559 - val_precision_4: 0.9571 - val_recall_4: 0.9516 - val_f2_score: 0.9527
Epoch 89/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1714 - accuracy: 0.9573 - precision_4: 0.9603 - recall_4: 0.9538 - f2_score: 0.9551 - val_loss: 0.2116 - val_accuracy: 0.9431 - val_precision_4: 0.9441 - val_recall_4: 0.9374 - val_f2_score: 0.9387
Epoch 90/100
198/198 [==============================] - 2s 8ms/step - loss: 0.1739 - accuracy: 0.9573 - precision_4: 0.9593 - recall_4: 0.9546 - f2_score: 0.9555 - val_loss: 0.2018 - val_accuracy: 0.9474 - val_precision_4: 0.9608 - val_recall_4: 0.9403 - val_f2_score: 0.9443
Epoch 91/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1894 - accuracy: 0.9524 - precision_4: 0.9559 - recall_4: 0.9505 - f2_score: 0.9516 - val_loss: 0.2218 - val_accuracy: 0.9431 - val_precision_4: 0.9482 - val_recall_4: 0.9374 - val_f2_score: 0.9395
Epoch 92/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1747 - accuracy: 0.9574 - precision_4: 0.9602 - recall_4: 0.9541 - f2_score: 0.9553 - val_loss: 0.1886 - val_accuracy: 0.9531 - val_precision_4: 0.9585 - val_recall_4: 0.9516 - val_f2_score: 0.9530
Epoch 93/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1787 - accuracy: 0.9579 - precision_4: 0.9610 - recall_4: 0.9549 - f2_score: 0.9561 - val_loss: 0.2236 - val_accuracy: 0.9346 - val_precision_4: 0.9370 - val_recall_4: 0.9303 - val_f2_score: 0.9316
Epoch 94/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1727 - accuracy: 0.9566 - precision_4: 0.9597 - recall_4: 0.9536 - f2_score: 0.9548 - val_loss: 0.1646 - val_accuracy: 0.9659 - val_precision_4: 0.9672 - val_recall_4: 0.9659 - val_f2_score: 0.9661
Epoch 95/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1755 - accuracy: 0.9559 - precision_4: 0.9586 - recall_4: 0.9530 - f2_score: 0.9541 - val_loss: 0.2132 - val_accuracy: 0.9445 - val_precision_4: 0.9511 - val_recall_4: 0.9403 - val_f2_score: 0.9424
Epoch 96/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1817 - accuracy: 0.9560 - precision_4: 0.9601 - recall_4: 0.9524 - f2_score: 0.9539 - val_loss: 0.1850 - val_accuracy: 0.9516 - val_precision_4: 0.9530 - val_recall_4: 0.9516 - val_f2_score: 0.9519
Epoch 97/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1728 - accuracy: 0.9562 - precision_4: 0.9612 - recall_4: 0.9533 - f2_score: 0.9549 - val_loss: 0.2733 - val_accuracy: 0.9289 - val_precision_4: 0.9312 - val_recall_4: 0.9246 - val_f2_score: 0.9259
Epoch 98/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1821 - accuracy: 0.9538 - precision_4: 0.9581 - recall_4: 0.9511 - f2_score: 0.9525 - val_loss: 0.2036 - val_accuracy: 0.9445 - val_precision_4: 0.9459 - val_recall_4: 0.9445 - val_f2_score: 0.9448
Epoch 99/100
198/198 [==============================] - 2s 9ms/step - loss: 0.1657 - accuracy: 0.9620 - precision_4: 0.9657 - recall_4: 0.9587 - f2_score: 0.9601 - val_loss: 0.1975 - val_accuracy: 0.9488 - val_precision_4: 0.9485 - val_recall_4: 0.9431 - val_f2_score: 0.9442
Epoch 100/100
198/198 [==============================] - 2s 12ms/step - loss: 0.1806 - accuracy: 0.9525 - precision_4: 0.9558 - recall_4: 0.9503 - f2_score: 0.9514 - val_loss: 0.2567 - val_accuracy: 0.9388 - val_precision_4: 0.9425 - val_recall_4: 0.9331 - val_f2_score: 0.9350
In [ ]:
# Plot Training F2 Score
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(H1A.history['f2_score'], label = 'training')

# Plot Val F2 Score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(H1A.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot Training Loss
plt.subplot(1, 2, 2)
plt.plot(H1A.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot Val Loss
plt.plot(H1A.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = Conv1A.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred, NUM_CLASSES, 'Conv1A')

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {H1A.history['val_f2_score']}\n\tRecall: {H1A.history['val_recall_4']}")
print(f"\tPrecision: {H1A.history['val_precision_4']}\n\tAccuracy: {H1A.history['val_accuracy']}")
No description has been provided for this image
22/22 [==============================] - 0s 3ms/step
No description has been provided for this image
No description has been provided for this image
Validation Scores:
	F2 Score: [0.9147905111312866, 0.9245015382766724, 0.9327827095985413, 0.9181637167930603, 0.910647988319397, 0.9424828290939331, 0.9389269351959229, 0.9404389262199402, 0.9399032592773438, 0.9455219507217407, 0.9353644847869873, 0.9358974099159241, 0.9439066052436829, 0.897691547870636, 0.9350056648254395, 0.9347392320632935, 0.905332088470459, 0.9246575832366943, 0.926822304725647, 0.9321551322937012, 0.9458689093589783, 0.9407069683074951, 0.9470235705375671, 0.92476487159729, 0.9163573980331421, 0.9364310503005981, 0.9516358375549316, 0.9481038451194763, 0.9335804581642151, 0.936164140701294, 0.9489883780479431, 0.9496013522148132, 0.9418637752532959, 0.9487909078598022, 0.9510250687599182, 0.8847031593322754, 0.9390138983726501, 0.9358974099159241, 0.933314323425293, 0.9510250687599182, 0.9443651437759399, 0.9321551322937012, 0.9299344420433044, 0.9181637167930603, 0.9336748719215393, 0.9415954351425171, 0.9342067241668701, 0.9458689093589783, 0.946753978729248, 0.9569923877716064, 0.9458689093589783, 0.9430199861526489, 0.9195894598960876, 0.9375891089439392, 0.9378563165664673, 0.9102563858032227, 0.9367880821228027, 0.9501424431800842, 0.9413272738456726, 0.946408212184906, 0.9342067241668701, 0.9410592317581177, 0.9449828863143921, 0.9533295035362244, 0.9370549917221069, 0.9558404088020325, 0.9532630443572998, 0.9455995559692383, 0.9541441798210144, 0.9595671892166138, 0.936164140701294, 0.9575377702713013, 0.9589627385139465, 0.9372325539588928, 0.9478334784507751, 0.9455995559692383, 0.933314323425293, 0.9336748719215393, 0.9356308579444885, 0.9515668749809265, 0.9458689093589783, 0.927614688873291, 0.9581435322761536, 0.9615384936332703, 0.9327827095985413, 0.9561753869056702, 0.9294050931930542, 0.952720046043396, 0.9387463927268982, 0.9442856907844543, 0.9395493865013123, 0.9529914259910583, 0.9316238760948181, 0.9661354422569275, 0.9424009323120117, 0.951906681060791, 0.9259259700775146, 0.9447922706604004, 0.9441754221916199, 0.9350056648254395]
	Recall: [0.9132290482521057, 0.9231863617897034, 0.9317212104797363, 0.9160739779472351, 0.9075391292572021, 0.941678524017334, 0.9359886050224304, 0.9388335943222046, 0.9388335943222046, 0.9431009888648987, 0.9345661401748657, 0.9345661401748657, 0.9431009888648987, 0.896159291267395, 0.933143675327301, 0.933143675327301, 0.9032716751098633, 0.9217638969421387, 0.9260312914848328, 0.9302987456321716, 0.9445234537124634, 0.9388335943222046, 0.9459459185600281, 0.9231863617897034, 0.9132290482521057, 0.9345661401748657, 0.9516358375549316, 0.9459459185600281, 0.9317212104797363, 0.9345661401748657, 0.9473684430122375, 0.9487909078598022, 0.9402560591697693, 0.9487909078598022, 0.9502133727073669, 0.8819345831871033, 0.9374110698699951, 0.9345661401748657, 0.9317212104797363, 0.9502133727073669, 0.941678524017334, 0.9302987456321716, 0.9288762211799622, 0.9160739779472351, 0.933143675327301, 0.9402560591697693, 0.933143675327301, 0.9445234537124634, 0.9459459185600281, 0.9559032917022705, 0.9445234537124634, 0.941678524017334, 0.9174964427947998, 0.9359886050224304, 0.9359886050224304, 0.9089615941047668, 0.9359886050224304, 0.9487909078598022, 0.9402560591697693, 0.9445234537124634, 0.933143675327301, 0.9402560591697693, 0.9431009888648987, 0.9530583024024963, 0.9359886050224304, 0.954480767250061, 0.9516358375549316, 0.9445234537124634, 0.9530583024024963, 0.9587482213973999, 0.9345661401748657, 0.9559032917022705, 0.9573257565498352, 0.9345661401748657, 0.9459459185600281, 0.9445234537124634, 0.9317212104797363, 0.933143675327301, 0.9345661401748657, 0.9502133727073669, 0.9445234537124634, 0.9260312914848328, 0.9573257565498352, 0.9601706862449646, 0.9317212104797363, 0.9559032917022705, 0.9288762211799622, 0.9516358375549316, 0.9374110698699951, 0.9402560591697693, 0.9374110698699951, 0.9516358375549316, 0.9302987456321716, 0.9658606052398682, 0.9402560591697693, 0.9516358375549316, 0.9246088266372681, 0.9445234537124634, 0.9431009888648987, 0.933143675327301]
	Precision: [0.9210903644561768, 0.9297994375228882, 0.9370529055595398, 0.9266186952590942, 0.9232995510101318, 0.9457142949104309, 0.9508670568466187, 0.946915328502655, 0.9442059993743896, 0.9553313851356506, 0.9385714530944824, 0.9412607550621033, 0.9471428394317627, 0.9038737416267395, 0.9425287246704102, 0.9411764740943909, 0.9136690497398376, 0.9364162087440491, 0.9300000071525574, 0.9396551847457886, 0.9512894153594971, 0.9482758641242981, 0.9513590931892395, 0.9311334490776062, 0.929088294506073, 0.943965494632721, 0.9516358375549316, 0.9568345546722412, 0.9410919547080994, 0.9426112174987793, 0.955523669719696, 0.9528571367263794, 0.9483500719070435, 0.9487909078598022, 0.954285740852356, 0.8959537744522095, 0.9454806447029114, 0.9412607550621033, 0.9397417306900024, 0.954285740852356, 0.9552669525146484, 0.9396551847457886, 0.9341917037963867, 0.9266186952590942, 0.9358059763908386, 0.9469913840293884, 0.9384835362434387, 0.9512894153594971, 0.949999988079071, 0.9613733887672424, 0.9512894153594971, 0.9484240412712097, 0.9280575513839722, 0.944045901298523, 0.9454023241996765, 0.9154728055000305, 0.9399999976158142, 0.9555873870849609, 0.9456366300582886, 0.954023003578186, 0.9384835362434387, 0.9442856907844543, 0.9525862336158752, 0.9544159770011902, 0.9413447976112366, 0.9613180756568909, 0.9598278403282166, 0.9499284625053406, 0.9585121870040894, 0.9628571271896362, 0.9426112174987793, 0.9641320109367371, 0.9655666947364807, 0.948051929473877, 0.9554597735404968, 0.9499284625053406, 0.9397417306900024, 0.9358059763908386, 0.9399141669273376, 0.9570200443267822, 0.9512894153594971, 0.9340028762817383, 0.9614285826683044, 0.967048704624176, 0.9370529055595398, 0.9572649598121643, 0.9315263628959656, 0.9570815563201904, 0.9441260695457458, 0.9607558250427246, 0.9482014179229736, 0.9584527015686035, 0.9369627237319946, 0.9672364592552185, 0.9510791301727295, 0.9529914259910583, 0.9312320947647095, 0.945868968963623, 0.9484978318214417, 0.9425287246704102]
	Accuracy: [0.9174964427947998, 0.9260312914848328, 0.9317212104797363, 0.9231863617897034, 0.9189189076423645, 0.941678524017334, 0.9445234537124634, 0.9402560591697693, 0.9402560591697693, 0.9487909078598022, 0.9345661401748657, 0.9359886050224304, 0.9445234537124634, 0.8990042805671692, 0.9374110698699951, 0.9402560591697693, 0.9061166644096375, 0.9317212104797363, 0.9274537563323975, 0.9359886050224304, 0.9502133727073669, 0.9388335943222046, 0.9487909078598022, 0.9274537563323975, 0.9203413724899292, 0.9374110698699951, 0.9516358375549316, 0.9487909078598022, 0.9359886050224304, 0.9388335943222046, 0.9530583024024963, 0.9530583024024963, 0.9431009888648987, 0.9487909078598022, 0.954480767250061, 0.8876244425773621, 0.9402560591697693, 0.9359886050224304, 0.9345661401748657, 0.9530583024024963, 0.9487909078598022, 0.9345661401748657, 0.9288762211799622, 0.9189189076423645, 0.9359886050224304, 0.941678524017334, 0.9345661401748657, 0.9487909078598022, 0.9473684430122375, 0.9587482213973999, 0.9487909078598022, 0.9445234537124634, 0.9217638969421387, 0.9388335943222046, 0.9402560591697693, 0.9132290482521057, 0.9359886050224304, 0.9516358375549316, 0.941678524017334, 0.9502133727073669, 0.9359886050224304, 0.941678524017334, 0.9473684430122375, 0.954480767250061, 0.9374110698699951, 0.9559032917022705, 0.9559032917022705, 0.9473684430122375, 0.954480767250061, 0.9587482213973999, 0.9359886050224304, 0.9601706862449646, 0.9630156755447388, 0.941678524017334, 0.9516358375549316, 0.9445234537124634, 0.9345661401748657, 0.933143675327301, 0.9388335943222046, 0.9516358375549316, 0.9487909078598022, 0.9302987456321716, 0.9615931510925293, 0.9630156755447388, 0.9317212104797363, 0.9573257565498352, 0.9288762211799622, 0.9559032917022705, 0.9431009888648987, 0.9473684430122375, 0.9431009888648987, 0.9530583024024963, 0.9345661401748657, 0.9658606052398682, 0.9445234537124634, 0.9516358375549316, 0.9288762211799622, 0.9445234537124634, 0.9487909078598022, 0.9388335943222046]
  • F2 Score: 0.9350
  • Recall: 0.9331
  • Precision: 0.9425
  • Accuracy: 0.9388

The confusion matrix for the brain tumor classification task indicates that the model has very few instances where it incorrectly predicts the absence of a tumor (false negatives) when there is one present (true positives), which is reflected in the high recall score of 0.9331.

Examining the confusion matrix more closely for false negatives:

  • Glioma (0): There are no instances where Glioma is incorrectly predicted as 'No Tumor'.

  • Meningioma (1): There are 5 instances where Meningioma is incorrectly predicted as 'No Tumor', which could have significant implications for patients if these were real-world diagnoses, as Meningioma tumors could go untreated.

  • Pituitary (3): Similarly, there are 2 instances where a Pituitary tumor is misclassified as 'No Tumor'. Although fewer in number compared to Meningioma, these errors are still critical as they represent missed diagnoses.

The 'No Tumor' predictions are accurate and there are no instances where a tumor is present but the model predicts 'No Tumor', which is an ideal scenario for medical diagnosis models because it avoids missing a diagnosis of a condition that requires treatment.

The model has demonstrated a high degree of accuracy in distinguishing between four different classes of brain tumor-related conditions: Glioma, Meningioma, No Tumor, and Pituitary. An F2 score of 0.9350 indicates that the model places more emphasis on recall than precision, meaning it is more focused on minimizing false negatives. This is particularly important in medical diagnoses where failing to detect an actual condition (a false negative) is usually more critical than incorrectly diagnosing a condition (a false positive).

In [ ]:
# Parameters
f = 32          # No. Filters
l = 4           # No. Layers
k = 3           # Kernel Size (k x k)
lam = 0.000001  # Kernel Regularization Constant (L2)

# Initialize Sequential Network
Conv1B = Sequential()

# Add Augmentations Directly
# Horizontal Flip, 10% Rotation, 10% Move, Brightness / Contrast Adjust 
Conv1B.add( RandomFlip("horizontal") )
Conv1B.add( RandomRotation(0.1) )
Conv1B.add( RandomTranslation(height_factor = 0.1, width_factor = 0.1) )
Conv1B.add( RandomBrightness(factor = 0.1, value_range = (0.0, 1.0)) )
Conv1B.add( RandomContrast(0.1) ) 

# Add Multiple Layers (Changeable)
for i in range(l):
  
    # Add Convolutional Layer, Follow With Pooling
    # Note: Max Filter 1024 Following Unet Architecture
    Conv1B.add(Conv2D(filters = (f * 2 ** i),
                    input_shape = (IMG_SIZE, IMG_SIZE, 1),
                    kernel_size = (k, k), 
                    kernel_regularizer = l2(lam),
                    kernel_initializer = 'he_uniform',
                    padding = 'same', 
                    activation = 'relu',
                    data_format = 'channels_last'))
    Conv1B.add(MaxPooling2D(pool_size = (2, 2), data_format = 'channels_last'))

# Flatten After Convolutional Layers
Conv1B.add(Flatten())

# Dropout Regularization Unnecessary
Conv1B.add(Dense(NUM_CLASSES, activation = 'softmax', 
                kernel_initializer = 'glorot_uniform',
                kernel_regularizer = l2(lam)
                ))
In [ ]:
# Train With CC, Adam
Conv1B.compile(loss = 'categorical_crossentropy',
               optimizer = 'adam',
               metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Build Model With Basic Parameters (Build For Grayscale Images)
Conv1B.build((None, IMG_SIZE, IMG_SIZE, 1))
Conv1B.summary()

# Fit Model (High Patience For Full Convergence)
H1B = Conv1B.fit(X_train, y_train_encoded, 
          batch_size = 128,
          epochs = 100, 
          verbose = 1,
          validation_data = (X_test, y_test_encoded))
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip_2 (RandomFlip)  (None, 256, 256, 1)       0         
                                                                 
 random_rotation_2 (RandomR  (None, 256, 256, 1)       0         
 otation)                                                        
                                                                 
 random_translation_2 (Rand  (None, 256, 256, 1)       0         
 omTranslation)                                                  
                                                                 
 random_brightness_2 (Rando  (None, 256, 256, 1)       0         
 mBrightness)                                                    
                                                                 
 random_contrast_2 (RandomC  (None, 256, 256, 1)       0         
 ontrast)                                                        
                                                                 
 conv2d_8 (Conv2D)           (None, 256, 256, 32)      320       
                                                                 
 max_pooling2d_8 (MaxPoolin  (None, 128, 128, 32)      0         
 g2D)                                                            
                                                                 
 conv2d_9 (Conv2D)           (None, 128, 128, 64)      18496     
                                                                 
 max_pooling2d_9 (MaxPoolin  (None, 64, 64, 64)        0         
 g2D)                                                            
                                                                 
 conv2d_10 (Conv2D)          (None, 64, 64, 128)       73856     
                                                                 
 max_pooling2d_10 (MaxPooli  (None, 32, 32, 128)       0         
 ng2D)                                                           
                                                                 
 conv2d_11 (Conv2D)          (None, 32, 32, 256)       295168    
                                                                 
 max_pooling2d_11 (MaxPooli  (None, 16, 16, 256)       0         
 ng2D)                                                           
                                                                 
 flatten_2 (Flatten)         (None, 65536)             0         
                                                                 
 dense_2 (Dense)             (None, 4)                 262148    
                                                                 
=================================================================
Total params: 649988 (2.48 MB)
Trainable params: 649988 (2.48 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
50/50 [==============================] - 7s 71ms/step - loss: 1.7118 - accuracy: 0.4880 - precision_6: 0.5926 - recall_6: 0.2769 - f2_score: 0.3099 - val_loss: 0.8160 - val_accuracy: 0.6615 - val_precision_6: 0.7965 - val_recall_6: 0.4509 - val_f2_score: 0.4938
Epoch 2/100
50/50 [==============================] - 3s 53ms/step - loss: 0.8153 - accuracy: 0.6570 - precision_6: 0.7307 - recall_6: 0.5318 - f2_score: 0.5624 - val_loss: 0.9183 - val_accuracy: 0.6743 - val_precision_6: 0.6931 - val_recall_6: 0.6458 - val_f2_score: 0.6547
Epoch 3/100
50/50 [==============================] - 2s 48ms/step - loss: 0.7480 - accuracy: 0.6935 - precision_6: 0.7582 - recall_6: 0.5866 - f2_score: 0.6144 - val_loss: 0.6404 - val_accuracy: 0.7468 - val_precision_6: 0.8112 - val_recall_6: 0.6600 - val_f2_score: 0.6856
Epoch 4/100
50/50 [==============================] - 2s 41ms/step - loss: 0.6282 - accuracy: 0.7475 - precision_6: 0.7951 - recall_6: 0.6869 - f2_score: 0.7061 - val_loss: 0.5437 - val_accuracy: 0.7838 - val_precision_6: 0.8069 - val_recall_6: 0.7368 - val_f2_score: 0.7499
Epoch 5/100
50/50 [==============================] - 3s 51ms/step - loss: 0.6216 - accuracy: 0.7557 - precision_6: 0.7999 - recall_6: 0.6981 - f2_score: 0.7163 - val_loss: 0.4994 - val_accuracy: 0.8094 - val_precision_6: 0.8348 - val_recall_6: 0.7838 - val_f2_score: 0.7935
Epoch 6/100
50/50 [==============================] - 2s 43ms/step - loss: 0.5582 - accuracy: 0.7834 - precision_6: 0.8145 - recall_6: 0.7364 - f2_score: 0.7508 - val_loss: 0.4286 - val_accuracy: 0.8421 - val_precision_6: 0.8622 - val_recall_6: 0.7923 - val_f2_score: 0.8054
Epoch 7/100
50/50 [==============================] - 2s 43ms/step - loss: 0.5165 - accuracy: 0.8047 - precision_6: 0.8310 - recall_6: 0.7666 - f2_score: 0.7787 - val_loss: 0.4658 - val_accuracy: 0.8378 - val_precision_6: 0.8582 - val_recall_6: 0.8094 - val_f2_score: 0.8187
Epoch 8/100
50/50 [==============================] - 2s 49ms/step - loss: 0.4536 - accuracy: 0.8266 - precision_6: 0.8497 - recall_6: 0.7962 - f2_score: 0.8064 - val_loss: 0.4473 - val_accuracy: 0.8208 - val_precision_6: 0.8438 - val_recall_6: 0.7994 - val_f2_score: 0.8079
Epoch 9/100
50/50 [==============================] - 2s 41ms/step - loss: 0.4297 - accuracy: 0.8400 - precision_6: 0.8607 - recall_6: 0.8146 - f2_score: 0.8234 - val_loss: 0.5554 - val_accuracy: 0.7824 - val_precision_6: 0.8264 - val_recall_6: 0.7653 - val_f2_score: 0.7768
Epoch 10/100
50/50 [==============================] - 2s 41ms/step - loss: 0.4364 - accuracy: 0.8321 - precision_6: 0.8535 - recall_6: 0.8065 - f2_score: 0.8155 - val_loss: 0.4060 - val_accuracy: 0.8606 - val_precision_6: 0.8782 - val_recall_6: 0.8307 - val_f2_score: 0.8398
Epoch 11/100
50/50 [==============================] - 3s 53ms/step - loss: 0.3841 - accuracy: 0.8578 - precision_6: 0.8736 - recall_6: 0.8323 - f2_score: 0.8402 - val_loss: 0.3741 - val_accuracy: 0.8649 - val_precision_6: 0.8744 - val_recall_6: 0.8321 - val_f2_score: 0.8403
Epoch 12/100
50/50 [==============================] - 2s 41ms/step - loss: 0.3595 - accuracy: 0.8649 - precision_6: 0.8791 - recall_6: 0.8494 - f2_score: 0.8552 - val_loss: 0.4714 - val_accuracy: 0.8137 - val_precision_6: 0.8210 - val_recall_6: 0.8023 - val_f2_score: 0.8059
Epoch 13/100
50/50 [==============================] - 2s 42ms/step - loss: 0.3671 - accuracy: 0.8606 - precision_6: 0.8755 - recall_6: 0.8415 - f2_score: 0.8481 - val_loss: 0.2909 - val_accuracy: 0.8933 - val_precision_6: 0.9108 - val_recall_6: 0.8862 - val_f2_score: 0.8910
Epoch 14/100
50/50 [==============================] - 3s 55ms/step - loss: 0.3437 - accuracy: 0.8687 - precision_6: 0.8821 - recall_6: 0.8533 - f2_score: 0.8589 - val_loss: 0.3375 - val_accuracy: 0.8762 - val_precision_6: 0.8882 - val_recall_6: 0.8592 - val_f2_score: 0.8648
Epoch 15/100
50/50 [==============================] - 2s 42ms/step - loss: 0.3408 - accuracy: 0.8799 - precision_6: 0.8928 - recall_6: 0.8644 - f2_score: 0.8699 - val_loss: 0.3297 - val_accuracy: 0.8748 - val_precision_6: 0.8889 - val_recall_6: 0.8535 - val_f2_score: 0.8603
Epoch 16/100
50/50 [==============================] - 2s 41ms/step - loss: 0.3026 - accuracy: 0.8880 - precision_6: 0.8994 - recall_6: 0.8756 - f2_score: 0.8803 - val_loss: 0.3160 - val_accuracy: 0.8791 - val_precision_6: 0.8876 - val_recall_6: 0.8762 - val_f2_score: 0.8785
Epoch 17/100
50/50 [==============================] - 3s 51ms/step - loss: 0.3240 - accuracy: 0.8799 - precision_6: 0.8899 - recall_6: 0.8661 - f2_score: 0.8708 - val_loss: 0.2695 - val_accuracy: 0.9018 - val_precision_6: 0.9098 - val_recall_6: 0.8890 - val_f2_score: 0.8931
Epoch 18/100
50/50 [==============================] - 2s 41ms/step - loss: 0.2707 - accuracy: 0.9028 - precision_6: 0.9120 - recall_6: 0.8918 - f2_score: 0.8957 - val_loss: 0.2786 - val_accuracy: 0.8990 - val_precision_6: 0.9057 - val_recall_6: 0.8876 - val_f2_score: 0.8912
Epoch 19/100
50/50 [==============================] - 2s 45ms/step - loss: 0.2915 - accuracy: 0.8927 - precision_6: 0.9023 - recall_6: 0.8809 - f2_score: 0.8851 - val_loss: 0.2513 - val_accuracy: 0.9061 - val_precision_6: 0.9105 - val_recall_6: 0.8976 - val_f2_score: 0.9001
Epoch 20/100
50/50 [==============================] - 2s 49ms/step - loss: 0.2735 - accuracy: 0.8957 - precision_6: 0.9075 - recall_6: 0.8859 - f2_score: 0.8901 - val_loss: 0.2839 - val_accuracy: 0.8990 - val_precision_6: 0.9009 - val_recall_6: 0.8919 - val_f2_score: 0.8937
Epoch 21/100
50/50 [==============================] - 2s 42ms/step - loss: 0.2339 - accuracy: 0.9152 - precision_6: 0.9228 - recall_6: 0.9079 - f2_score: 0.9109 - val_loss: 0.2057 - val_accuracy: 0.9175 - val_precision_6: 0.9277 - val_recall_6: 0.9132 - val_f2_score: 0.9161
Epoch 22/100
50/50 [==============================] - 2s 46ms/step - loss: 0.2250 - accuracy: 0.9174 - precision_6: 0.9253 - recall_6: 0.9092 - f2_score: 0.9124 - val_loss: 0.1822 - val_accuracy: 0.9331 - val_precision_6: 0.9343 - val_recall_6: 0.9303 - val_f2_score: 0.9311
Epoch 23/100
50/50 [==============================] - 2s 47ms/step - loss: 0.2479 - accuracy: 0.9106 - precision_6: 0.9183 - recall_6: 0.9003 - f2_score: 0.9039 - val_loss: 0.2034 - val_accuracy: 0.9232 - val_precision_6: 0.9296 - val_recall_6: 0.9203 - val_f2_score: 0.9222
Epoch 24/100
50/50 [==============================] - 2s 41ms/step - loss: 0.2266 - accuracy: 0.9144 - precision_6: 0.9232 - recall_6: 0.9073 - f2_score: 0.9104 - val_loss: 0.1936 - val_accuracy: 0.9346 - val_precision_6: 0.9396 - val_recall_6: 0.9289 - val_f2_score: 0.9310
Epoch 25/100
50/50 [==============================] - 2s 41ms/step - loss: 0.2139 - accuracy: 0.9222 - precision_6: 0.9285 - recall_6: 0.9139 - f2_score: 0.9168 - val_loss: 0.2170 - val_accuracy: 0.9189 - val_precision_6: 0.9265 - val_recall_6: 0.9147 - val_f2_score: 0.9170
Epoch 26/100
50/50 [==============================] - 3s 53ms/step - loss: 0.1970 - accuracy: 0.9266 - precision_6: 0.9326 - recall_6: 0.9199 - f2_score: 0.9224 - val_loss: 0.1961 - val_accuracy: 0.9346 - val_precision_6: 0.9370 - val_recall_6: 0.9303 - val_f2_score: 0.9316
Epoch 27/100
50/50 [==============================] - 2s 41ms/step - loss: 0.2058 - accuracy: 0.9277 - precision_6: 0.9341 - recall_6: 0.9210 - f2_score: 0.9236 - val_loss: 0.1545 - val_accuracy: 0.9431 - val_precision_6: 0.9467 - val_recall_6: 0.9346 - val_f2_score: 0.9370
Epoch 28/100
50/50 [==============================] - 2s 41ms/step - loss: 0.1818 - accuracy: 0.9342 - precision_6: 0.9417 - recall_6: 0.9297 - f2_score: 0.9321 - val_loss: 0.1216 - val_accuracy: 0.9602 - val_precision_6: 0.9642 - val_recall_6: 0.9573 - val_f2_score: 0.9587
Epoch 29/100
50/50 [==============================] - 3s 54ms/step - loss: 0.1967 - accuracy: 0.9267 - precision_6: 0.9324 - recall_6: 0.9191 - f2_score: 0.9218 - val_loss: 0.1722 - val_accuracy: 0.9360 - val_precision_6: 0.9424 - val_recall_6: 0.9303 - val_f2_score: 0.9327
Epoch 30/100
50/50 [==============================] - 2s 41ms/step - loss: 0.2141 - accuracy: 0.9215 - precision_6: 0.9282 - recall_6: 0.9128 - f2_score: 0.9159 - val_loss: 0.2042 - val_accuracy: 0.9260 - val_precision_6: 0.9365 - val_recall_6: 0.9232 - val_f2_score: 0.9258
Epoch 31/100
50/50 [==============================] - 2s 43ms/step - loss: 0.1803 - accuracy: 0.9337 - precision_6: 0.9391 - recall_6: 0.9266 - f2_score: 0.9291 - val_loss: 0.1474 - val_accuracy: 0.9474 - val_precision_6: 0.9483 - val_recall_6: 0.9388 - val_f2_score: 0.9407
Epoch 32/100
50/50 [==============================] - 3s 51ms/step - loss: 0.1683 - accuracy: 0.9378 - precision_6: 0.9436 - recall_6: 0.9339 - f2_score: 0.9358 - val_loss: 0.1774 - val_accuracy: 0.9346 - val_precision_6: 0.9398 - val_recall_6: 0.9331 - val_f2_score: 0.9345
Epoch 33/100
50/50 [==============================] - 2s 44ms/step - loss: 0.1754 - accuracy: 0.9340 - precision_6: 0.9401 - recall_6: 0.9294 - f2_score: 0.9316 - val_loss: 0.1422 - val_accuracy: 0.9531 - val_precision_6: 0.9557 - val_recall_6: 0.9516 - val_f2_score: 0.9524
Epoch 34/100
50/50 [==============================] - 2s 42ms/step - loss: 0.1649 - accuracy: 0.9419 - precision_6: 0.9441 - recall_6: 0.9354 - f2_score: 0.9372 - val_loss: 0.1254 - val_accuracy: 0.9559 - val_precision_6: 0.9600 - val_recall_6: 0.9559 - val_f2_score: 0.9567
Epoch 35/100
50/50 [==============================] - 2s 49ms/step - loss: 0.1649 - accuracy: 0.9430 - precision_6: 0.9467 - recall_6: 0.9389 - f2_score: 0.9405 - val_loss: 0.1172 - val_accuracy: 0.9602 - val_precision_6: 0.9642 - val_recall_6: 0.9587 - val_f2_score: 0.9598
Epoch 36/100
50/50 [==============================] - 2s 42ms/step - loss: 0.1670 - accuracy: 0.9405 - precision_6: 0.9463 - recall_6: 0.9345 - f2_score: 0.9368 - val_loss: 0.1169 - val_accuracy: 0.9602 - val_precision_6: 0.9629 - val_recall_6: 0.9587 - val_f2_score: 0.9596
Epoch 37/100
50/50 [==============================] - 2s 43ms/step - loss: 0.1472 - accuracy: 0.9472 - precision_6: 0.9509 - recall_6: 0.9441 - f2_score: 0.9455 - val_loss: 0.1207 - val_accuracy: 0.9587 - val_precision_6: 0.9601 - val_recall_6: 0.9573 - val_f2_score: 0.9579
Epoch 38/100
50/50 [==============================] - 2s 49ms/step - loss: 0.1649 - accuracy: 0.9410 - precision_6: 0.9467 - recall_6: 0.9354 - f2_score: 0.9377 - val_loss: 0.1537 - val_accuracy: 0.9431 - val_precision_6: 0.9430 - val_recall_6: 0.9417 - val_f2_score: 0.9419
Epoch 39/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1506 - accuracy: 0.9500 - precision_6: 0.9552 - recall_6: 0.9475 - f2_score: 0.9490 - val_loss: 0.1625 - val_accuracy: 0.9531 - val_precision_6: 0.9557 - val_recall_6: 0.9502 - val_f2_score: 0.9513
Epoch 40/100
50/50 [==============================] - 2s 41ms/step - loss: 0.1624 - accuracy: 0.9437 - precision_6: 0.9477 - recall_6: 0.9397 - f2_score: 0.9413 - val_loss: 0.1513 - val_accuracy: 0.9417 - val_precision_6: 0.9484 - val_recall_6: 0.9403 - val_f2_score: 0.9419
Epoch 41/100
50/50 [==============================] - 3s 54ms/step - loss: 0.1463 - accuracy: 0.9494 - precision_6: 0.9532 - recall_6: 0.9443 - f2_score: 0.9461 - val_loss: 0.1438 - val_accuracy: 0.9488 - val_precision_6: 0.9527 - val_recall_6: 0.9445 - val_f2_score: 0.9461
Epoch 42/100
50/50 [==============================] - 2s 45ms/step - loss: 0.1441 - accuracy: 0.9506 - precision_6: 0.9565 - recall_6: 0.9454 - f2_score: 0.9476 - val_loss: 0.1147 - val_accuracy: 0.9587 - val_precision_6: 0.9613 - val_recall_6: 0.9531 - val_f2_score: 0.9547
Epoch 43/100
50/50 [==============================] - 2s 41ms/step - loss: 0.1252 - accuracy: 0.9521 - precision_6: 0.9564 - recall_6: 0.9486 - f2_score: 0.9501 - val_loss: 0.0910 - val_accuracy: 0.9701 - val_precision_6: 0.9741 - val_recall_6: 0.9644 - val_f2_score: 0.9664
Epoch 44/100
50/50 [==============================] - 2s 49ms/step - loss: 0.1320 - accuracy: 0.9538 - precision_6: 0.9559 - recall_6: 0.9494 - f2_score: 0.9507 - val_loss: 0.1008 - val_accuracy: 0.9616 - val_precision_6: 0.9657 - val_recall_6: 0.9602 - val_f2_score: 0.9613
Epoch 45/100
50/50 [==============================] - 2s 41ms/step - loss: 0.1318 - accuracy: 0.9535 - precision_6: 0.9559 - recall_6: 0.9500 - f2_score: 0.9512 - val_loss: 0.1075 - val_accuracy: 0.9673 - val_precision_6: 0.9672 - val_recall_6: 0.9659 - val_f2_score: 0.9661
Epoch 46/100
50/50 [==============================] - 2s 46ms/step - loss: 0.1251 - accuracy: 0.9517 - precision_6: 0.9558 - recall_6: 0.9481 - f2_score: 0.9496 - val_loss: 0.1033 - val_accuracy: 0.9644 - val_precision_6: 0.9658 - val_recall_6: 0.9644 - val_f2_score: 0.9647
Epoch 47/100
50/50 [==============================] - 2s 49ms/step - loss: 0.1311 - accuracy: 0.9498 - precision_6: 0.9524 - recall_6: 0.9473 - f2_score: 0.9483 - val_loss: 0.0975 - val_accuracy: 0.9687 - val_precision_6: 0.9700 - val_recall_6: 0.9673 - val_f2_score: 0.9678
Epoch 48/100
50/50 [==============================] - 2s 47ms/step - loss: 0.1263 - accuracy: 0.9562 - precision_6: 0.9581 - recall_6: 0.9524 - f2_score: 0.9535 - val_loss: 0.0862 - val_accuracy: 0.9673 - val_precision_6: 0.9672 - val_recall_6: 0.9659 - val_f2_score: 0.9661
Epoch 49/100
50/50 [==============================] - 2s 44ms/step - loss: 0.1215 - accuracy: 0.9571 - precision_6: 0.9589 - recall_6: 0.9535 - f2_score: 0.9546 - val_loss: 0.0835 - val_accuracy: 0.9701 - val_precision_6: 0.9728 - val_recall_6: 0.9673 - val_f2_score: 0.9684
Epoch 50/100
50/50 [==============================] - 3s 51ms/step - loss: 0.1144 - accuracy: 0.9581 - precision_6: 0.9608 - recall_6: 0.9549 - f2_score: 0.9561 - val_loss: 0.0858 - val_accuracy: 0.9730 - val_precision_6: 0.9744 - val_recall_6: 0.9730 - val_f2_score: 0.9732
Epoch 51/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1201 - accuracy: 0.9579 - precision_6: 0.9615 - recall_6: 0.9559 - f2_score: 0.9570 - val_loss: 0.0822 - val_accuracy: 0.9687 - val_precision_6: 0.9686 - val_recall_6: 0.9659 - val_f2_score: 0.9664
Epoch 52/100
50/50 [==============================] - 2s 46ms/step - loss: 0.1198 - accuracy: 0.9585 - precision_6: 0.9615 - recall_6: 0.9551 - f2_score: 0.9563 - val_loss: 0.1188 - val_accuracy: 0.9559 - val_precision_6: 0.9585 - val_recall_6: 0.9531 - val_f2_score: 0.9541
Epoch 53/100
50/50 [==============================] - 2s 45ms/step - loss: 0.1162 - accuracy: 0.9581 - precision_6: 0.9611 - recall_6: 0.9540 - f2_score: 0.9554 - val_loss: 0.0938 - val_accuracy: 0.9644 - val_precision_6: 0.9671 - val_recall_6: 0.9616 - val_f2_score: 0.9627
Epoch 54/100
50/50 [==============================] - 2s 48ms/step - loss: 0.1179 - accuracy: 0.9579 - precision_6: 0.9604 - recall_6: 0.9557 - f2_score: 0.9566 - val_loss: 0.1312 - val_accuracy: 0.9602 - val_precision_6: 0.9641 - val_recall_6: 0.9559 - val_f2_score: 0.9575
Epoch 55/100
50/50 [==============================] - 2s 43ms/step - loss: 0.1114 - accuracy: 0.9617 - precision_6: 0.9636 - recall_6: 0.9589 - f2_score: 0.9598 - val_loss: 0.1016 - val_accuracy: 0.9644 - val_precision_6: 0.9658 - val_recall_6: 0.9630 - val_f2_score: 0.9636
Epoch 56/100
50/50 [==============================] - 3s 52ms/step - loss: 0.0998 - accuracy: 0.9668 - precision_6: 0.9695 - recall_6: 0.9658 - f2_score: 0.9666 - val_loss: 0.0921 - val_accuracy: 0.9616 - val_precision_6: 0.9643 - val_recall_6: 0.9602 - val_f2_score: 0.9610
Epoch 57/100
50/50 [==============================] - 2s 43ms/step - loss: 0.1092 - accuracy: 0.9612 - precision_6: 0.9640 - recall_6: 0.9587 - f2_score: 0.9598 - val_loss: 0.0778 - val_accuracy: 0.9659 - val_precision_6: 0.9658 - val_recall_6: 0.9644 - val_f2_score: 0.9647
Epoch 58/100
50/50 [==============================] - 2s 49ms/step - loss: 0.1025 - accuracy: 0.9638 - precision_6: 0.9653 - recall_6: 0.9606 - f2_score: 0.9615 - val_loss: 0.0892 - val_accuracy: 0.9687 - val_precision_6: 0.9700 - val_recall_6: 0.9673 - val_f2_score: 0.9678
Epoch 59/100
50/50 [==============================] - 2s 45ms/step - loss: 0.1032 - accuracy: 0.9644 - precision_6: 0.9668 - recall_6: 0.9622 - f2_score: 0.9631 - val_loss: 0.1196 - val_accuracy: 0.9616 - val_precision_6: 0.9643 - val_recall_6: 0.9602 - val_f2_score: 0.9610
Epoch 60/100
50/50 [==============================] - 3s 55ms/step - loss: 0.1092 - accuracy: 0.9636 - precision_6: 0.9657 - recall_6: 0.9614 - f2_score: 0.9622 - val_loss: 0.0798 - val_accuracy: 0.9701 - val_precision_6: 0.9715 - val_recall_6: 0.9687 - val_f2_score: 0.9693
Epoch 61/100
50/50 [==============================] - 3s 63ms/step - loss: 0.0987 - accuracy: 0.9622 - precision_6: 0.9652 - recall_6: 0.9603 - f2_score: 0.9613 - val_loss: 0.1237 - val_accuracy: 0.9474 - val_precision_6: 0.9487 - val_recall_6: 0.9474 - val_f2_score: 0.9476
Epoch 62/100
50/50 [==============================] - 2s 48ms/step - loss: 0.0966 - accuracy: 0.9658 - precision_6: 0.9685 - recall_6: 0.9636 - f2_score: 0.9646 - val_loss: 0.1240 - val_accuracy: 0.9559 - val_precision_6: 0.9573 - val_recall_6: 0.9559 - val_f2_score: 0.9562
Epoch 63/100
50/50 [==============================] - 2s 46ms/step - loss: 0.0952 - accuracy: 0.9682 - precision_6: 0.9692 - recall_6: 0.9663 - f2_score: 0.9669 - val_loss: 0.1456 - val_accuracy: 0.9531 - val_precision_6: 0.9544 - val_recall_6: 0.9516 - val_f2_score: 0.9522
Epoch 64/100
50/50 [==============================] - 2s 46ms/step - loss: 0.1022 - accuracy: 0.9623 - precision_6: 0.9639 - recall_6: 0.9601 - f2_score: 0.9609 - val_loss: 0.0796 - val_accuracy: 0.9716 - val_precision_6: 0.9729 - val_recall_6: 0.9716 - val_f2_score: 0.9718
Epoch 65/100
50/50 [==============================] - 3s 51ms/step - loss: 0.1024 - accuracy: 0.9642 - precision_6: 0.9660 - recall_6: 0.9620 - f2_score: 0.9628 - val_loss: 0.0884 - val_accuracy: 0.9659 - val_precision_6: 0.9658 - val_recall_6: 0.9644 - val_f2_score: 0.9647
Epoch 66/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0950 - accuracy: 0.9657 - precision_6: 0.9670 - recall_6: 0.9641 - f2_score: 0.9647 - val_loss: 0.0914 - val_accuracy: 0.9701 - val_precision_6: 0.9715 - val_recall_6: 0.9701 - val_f2_score: 0.9704
Epoch 67/100
50/50 [==============================] - 3s 55ms/step - loss: 0.0987 - accuracy: 0.9652 - precision_6: 0.9679 - recall_6: 0.9630 - f2_score: 0.9640 - val_loss: 0.1249 - val_accuracy: 0.9587 - val_precision_6: 0.9587 - val_recall_6: 0.9573 - val_f2_score: 0.9576
Epoch 68/100
50/50 [==============================] - 2s 46ms/step - loss: 0.0886 - accuracy: 0.9704 - precision_6: 0.9727 - recall_6: 0.9691 - f2_score: 0.9699 - val_loss: 0.1019 - val_accuracy: 0.9716 - val_precision_6: 0.9715 - val_recall_6: 0.9701 - val_f2_score: 0.9704
Epoch 69/100
50/50 [==============================] - 3s 62ms/step - loss: 0.0901 - accuracy: 0.9665 - precision_6: 0.9678 - recall_6: 0.9657 - f2_score: 0.9661 - val_loss: 0.1221 - val_accuracy: 0.9587 - val_precision_6: 0.9600 - val_recall_6: 0.9559 - val_f2_score: 0.9567
Epoch 70/100
50/50 [==============================] - 3s 52ms/step - loss: 0.0834 - accuracy: 0.9693 - precision_6: 0.9719 - recall_6: 0.9677 - f2_score: 0.9685 - val_loss: 0.0926 - val_accuracy: 0.9659 - val_precision_6: 0.9658 - val_recall_6: 0.9644 - val_f2_score: 0.9647
Epoch 71/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0865 - accuracy: 0.9704 - precision_6: 0.9716 - recall_6: 0.9690 - f2_score: 0.9695 - val_loss: 0.0798 - val_accuracy: 0.9744 - val_precision_6: 0.9758 - val_recall_6: 0.9744 - val_f2_score: 0.9747
Epoch 72/100
50/50 [==============================] - 3s 55ms/step - loss: 0.0843 - accuracy: 0.9709 - precision_6: 0.9728 - recall_6: 0.9695 - f2_score: 0.9701 - val_loss: 0.1701 - val_accuracy: 0.9531 - val_precision_6: 0.9544 - val_recall_6: 0.9516 - val_f2_score: 0.9522
Epoch 73/100
50/50 [==============================] - 2s 43ms/step - loss: 0.1014 - accuracy: 0.9655 - precision_6: 0.9671 - recall_6: 0.9633 - f2_score: 0.9641 - val_loss: 0.1243 - val_accuracy: 0.9644 - val_precision_6: 0.9658 - val_recall_6: 0.9644 - val_f2_score: 0.9647
Epoch 74/100
50/50 [==============================] - 2s 46ms/step - loss: 0.0843 - accuracy: 0.9715 - precision_6: 0.9736 - recall_6: 0.9703 - f2_score: 0.9709 - val_loss: 0.1191 - val_accuracy: 0.9630 - val_precision_6: 0.9657 - val_recall_6: 0.9616 - val_f2_score: 0.9624
Epoch 75/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0847 - accuracy: 0.9728 - precision_6: 0.9740 - recall_6: 0.9703 - f2_score: 0.9710 - val_loss: 0.0990 - val_accuracy: 0.9744 - val_precision_6: 0.9744 - val_recall_6: 0.9744 - val_f2_score: 0.9744
Epoch 76/100
50/50 [==============================] - 2s 43ms/step - loss: 0.0739 - accuracy: 0.9761 - precision_6: 0.9775 - recall_6: 0.9750 - f2_score: 0.9755 - val_loss: 0.0797 - val_accuracy: 0.9673 - val_precision_6: 0.9672 - val_recall_6: 0.9659 - val_f2_score: 0.9661
Epoch 77/100
50/50 [==============================] - 2s 41ms/step - loss: 0.0797 - accuracy: 0.9720 - precision_6: 0.9726 - recall_6: 0.9706 - f2_score: 0.9710 - val_loss: 0.1439 - val_accuracy: 0.9559 - val_precision_6: 0.9586 - val_recall_6: 0.9559 - val_f2_score: 0.9564
Epoch 78/100
50/50 [==============================] - 2s 47ms/step - loss: 0.0806 - accuracy: 0.9737 - precision_6: 0.9748 - recall_6: 0.9717 - f2_score: 0.9723 - val_loss: 0.1059 - val_accuracy: 0.9630 - val_precision_6: 0.9630 - val_recall_6: 0.9630 - val_f2_score: 0.9630
Epoch 79/100
50/50 [==============================] - 2s 41ms/step - loss: 0.0811 - accuracy: 0.9725 - precision_6: 0.9741 - recall_6: 0.9709 - f2_score: 0.9715 - val_loss: 0.0899 - val_accuracy: 0.9772 - val_precision_6: 0.9786 - val_recall_6: 0.9744 - val_f2_score: 0.9752
Epoch 80/100
50/50 [==============================] - 2s 44ms/step - loss: 0.0977 - accuracy: 0.9685 - precision_6: 0.9700 - recall_6: 0.9665 - f2_score: 0.9672 - val_loss: 0.0780 - val_accuracy: 0.9758 - val_precision_6: 0.9758 - val_recall_6: 0.9758 - val_f2_score: 0.9758
Epoch 81/100
50/50 [==============================] - 2s 48ms/step - loss: 0.0741 - accuracy: 0.9734 - precision_6: 0.9751 - recall_6: 0.9718 - f2_score: 0.9725 - val_loss: 0.0952 - val_accuracy: 0.9687 - val_precision_6: 0.9687 - val_recall_6: 0.9687 - val_f2_score: 0.9687
Epoch 82/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0823 - accuracy: 0.9714 - precision_6: 0.9736 - recall_6: 0.9696 - f2_score: 0.9704 - val_loss: 0.0560 - val_accuracy: 0.9801 - val_precision_6: 0.9829 - val_recall_6: 0.9801 - val_f2_score: 0.9806
Epoch 83/100
50/50 [==============================] - 2s 40ms/step - loss: 0.0754 - accuracy: 0.9734 - precision_6: 0.9752 - recall_6: 0.9717 - f2_score: 0.9724 - val_loss: 0.1216 - val_accuracy: 0.9502 - val_precision_6: 0.9502 - val_recall_6: 0.9502 - val_f2_score: 0.9502
Epoch 84/100
50/50 [==============================] - 2s 49ms/step - loss: 0.0903 - accuracy: 0.9696 - precision_6: 0.9720 - recall_6: 0.9684 - f2_score: 0.9691 - val_loss: 0.1366 - val_accuracy: 0.9573 - val_precision_6: 0.9573 - val_recall_6: 0.9559 - val_f2_score: 0.9562
Epoch 85/100
50/50 [==============================] - 2s 43ms/step - loss: 0.0696 - accuracy: 0.9759 - precision_6: 0.9776 - recall_6: 0.9750 - f2_score: 0.9755 - val_loss: 0.0805 - val_accuracy: 0.9701 - val_precision_6: 0.9715 - val_recall_6: 0.9687 - val_f2_score: 0.9693
Epoch 86/100
50/50 [==============================] - 2s 44ms/step - loss: 0.0746 - accuracy: 0.9755 - precision_6: 0.9761 - recall_6: 0.9748 - f2_score: 0.9751 - val_loss: 0.0682 - val_accuracy: 0.9787 - val_precision_6: 0.9801 - val_recall_6: 0.9787 - val_f2_score: 0.9789
Epoch 87/100
50/50 [==============================] - 2s 48ms/step - loss: 0.0678 - accuracy: 0.9752 - precision_6: 0.9771 - recall_6: 0.9741 - f2_score: 0.9747 - val_loss: 0.0787 - val_accuracy: 0.9730 - val_precision_6: 0.9730 - val_recall_6: 0.9730 - val_f2_score: 0.9730
Epoch 88/100
50/50 [==============================] - 2s 43ms/step - loss: 0.0844 - accuracy: 0.9714 - precision_6: 0.9732 - recall_6: 0.9698 - f2_score: 0.9705 - val_loss: 0.1053 - val_accuracy: 0.9616 - val_precision_6: 0.9615 - val_recall_6: 0.9602 - val_f2_score: 0.9604
Epoch 89/100
50/50 [==============================] - 2s 40ms/step - loss: 0.0796 - accuracy: 0.9739 - precision_6: 0.9754 - recall_6: 0.9729 - f2_score: 0.9734 - val_loss: 0.0991 - val_accuracy: 0.9744 - val_precision_6: 0.9757 - val_recall_6: 0.9730 - val_f2_score: 0.9735
Epoch 90/100
50/50 [==============================] - 3s 51ms/step - loss: 0.0701 - accuracy: 0.9775 - precision_6: 0.9794 - recall_6: 0.9763 - f2_score: 0.9769 - val_loss: 0.0834 - val_accuracy: 0.9730 - val_precision_6: 0.9730 - val_recall_6: 0.9730 - val_f2_score: 0.9730
Epoch 91/100
50/50 [==============================] - 2s 40ms/step - loss: 0.0662 - accuracy: 0.9766 - precision_6: 0.9775 - recall_6: 0.9758 - f2_score: 0.9761 - val_loss: 0.0956 - val_accuracy: 0.9716 - val_precision_6: 0.9716 - val_recall_6: 0.9716 - val_f2_score: 0.9716
Epoch 92/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0622 - accuracy: 0.9802 - precision_6: 0.9808 - recall_6: 0.9786 - f2_score: 0.9791 - val_loss: 0.0984 - val_accuracy: 0.9687 - val_precision_6: 0.9687 - val_recall_6: 0.9687 - val_f2_score: 0.9687
Epoch 93/100
50/50 [==============================] - 2s 49ms/step - loss: 0.0667 - accuracy: 0.9756 - precision_6: 0.9765 - recall_6: 0.9739 - f2_score: 0.9744 - val_loss: 0.0712 - val_accuracy: 0.9787 - val_precision_6: 0.9786 - val_recall_6: 0.9772 - val_f2_score: 0.9775
Epoch 94/100
50/50 [==============================] - 2s 42ms/step - loss: 0.0740 - accuracy: 0.9737 - precision_6: 0.9746 - recall_6: 0.9729 - f2_score: 0.9733 - val_loss: 0.0810 - val_accuracy: 0.9758 - val_precision_6: 0.9786 - val_recall_6: 0.9758 - val_f2_score: 0.9764
Epoch 95/100
50/50 [==============================] - 2s 40ms/step - loss: 0.0727 - accuracy: 0.9737 - precision_6: 0.9755 - recall_6: 0.9720 - f2_score: 0.9727 - val_loss: 0.0716 - val_accuracy: 0.9815 - val_precision_6: 0.9815 - val_recall_6: 0.9801 - val_f2_score: 0.9804
Epoch 96/100
50/50 [==============================] - 3s 53ms/step - loss: 0.0743 - accuracy: 0.9742 - precision_6: 0.9756 - recall_6: 0.9733 - f2_score: 0.9737 - val_loss: 0.0990 - val_accuracy: 0.9744 - val_precision_6: 0.9758 - val_recall_6: 0.9744 - val_f2_score: 0.9747
Epoch 97/100
50/50 [==============================] - 2s 43ms/step - loss: 0.0678 - accuracy: 0.9772 - precision_6: 0.9789 - recall_6: 0.9761 - f2_score: 0.9767 - val_loss: 0.0886 - val_accuracy: 0.9687 - val_precision_6: 0.9701 - val_recall_6: 0.9687 - val_f2_score: 0.9690
Epoch 98/100
50/50 [==============================] - 2s 44ms/step - loss: 0.0679 - accuracy: 0.9771 - precision_6: 0.9787 - recall_6: 0.9761 - f2_score: 0.9766 - val_loss: 0.0906 - val_accuracy: 0.9701 - val_precision_6: 0.9715 - val_recall_6: 0.9701 - val_f2_score: 0.9704
Epoch 99/100
50/50 [==============================] - 2s 48ms/step - loss: 0.0679 - accuracy: 0.9785 - precision_6: 0.9802 - recall_6: 0.9782 - f2_score: 0.9786 - val_loss: 0.0766 - val_accuracy: 0.9730 - val_precision_6: 0.9730 - val_recall_6: 0.9730 - val_f2_score: 0.9730
Epoch 100/100
50/50 [==============================] - 2s 41ms/step - loss: 0.0593 - accuracy: 0.9797 - precision_6: 0.9805 - recall_6: 0.9788 - f2_score: 0.9791 - val_loss: 0.0906 - val_accuracy: 0.9758 - val_precision_6: 0.9772 - val_recall_6: 0.9744 - val_f2_score: 0.9750
In [ ]:
# Plot the training F2 score
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(H1B.history['f2_score'], label = 'training')

# Plot the Validation F2 Score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(H1B.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot the Training Loss
plt.subplot(1, 2, 2)
plt.plot(H1B.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot the Val Loss
plt.plot(H1B.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = Conv1B.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred, NUM_CLASSES, 'Conv1B')

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {H1B.history['val_f2_score']}\n\tRecall: {H1B.history['val_recall_6']}")
print(f"\tPrecision: {H1B.history['val_precision_6']}\n\tAccuracy: {H1B.history['val_accuracy']}")
No description has been provided for this image
22/22 [==============================] - 0s 5ms/step
No description has been provided for this image
No description has been provided for this image
Validation Scores:
	F2 Score: [0.49376946687698364, 0.6547447443008423, 0.6855791807174683, 0.7498552203178406, 0.7934908866882324, 0.8053788542747498, 0.8187050819396973, 0.8079356551170349, 0.7767831087112427, 0.839804470539093, 0.840275764465332, 0.805944561958313, 0.8910183310508728, 0.8648338913917542, 0.8603384494781494, 0.8784940242767334, 0.893112301826477, 0.8911740183830261, 0.9001426100730896, 0.8936716914176941, 0.9160958528518677, 0.9310934543609619, 0.9221779108047485, 0.9309951066970825, 0.9169993996620178, 0.9316238760948181, 0.9369651675224304, 0.9586895108222961, 0.9326869249343872, 0.9258202910423279, 0.9407069683074951, 0.9344728589057922, 0.9524487257003784, 0.9567198157310486, 0.9598404765129089, 0.9595671892166138, 0.9578707218170166, 0.9419465065002441, 0.9512959122657776, 0.9418637752532959, 0.9461385607719421, 0.9546878933906555, 0.9663626551628113, 0.9612646102905273, 0.9661354422569275, 0.9647125601768494, 0.9678336977958679, 0.9661354422569275, 0.9683850407600403, 0.9732497930526733, 0.9664105772972107, 0.9541441798210144, 0.9626886248588562, 0.9575377702713013, 0.9635639786720276, 0.9609909057617188, 0.9647125601768494, 0.9678336977958679, 0.9609909057617188, 0.9692569971084595, 0.9476380348205566, 0.9561753869056702, 0.9521775841712952, 0.9718270897865295, 0.9647125601768494, 0.9704041481018066, 0.9575981497764587, 0.9704041481018066, 0.9567198157310486, 0.9647125601768494, 0.974672794342041, 0.9521775841712952, 0.9647125601768494, 0.9624145030975342, 0.9743954539299011, 0.9661354422569275, 0.956447422504425, 0.9630156755447388, 0.9752278327941895, 0.9758179187774658, 0.9687055349349976, 0.9806434512138367, 0.9502134323120117, 0.9561753869056702, 0.9692569971084595, 0.9789413213729858, 0.9729730486869812, 0.9604438543319702, 0.9735268354415894, 0.9729730486869812, 0.9715505838394165, 0.9687055349349976, 0.9775184988975525, 0.9763734936714172, 0.980364203453064, 0.974672794342041, 0.9689812064170837, 0.9704041481018066, 0.9729730486869812, 0.9749503135681152]
	Recall: [0.4509246051311493, 0.645803689956665, 0.6600284576416016, 0.7368420958518982, 0.7837837934494019, 0.7923186421394348, 0.8093883395195007, 0.7994310259819031, 0.7652916312217712, 0.8307254910469055, 0.8321479558944702, 0.8022759556770325, 0.8862019777297974, 0.8591749668121338, 0.8534850478172302, 0.8762446641921997, 0.8890469670295715, 0.8876244425773621, 0.8975818157196045, 0.8918918967247009, 0.9132290482521057, 0.9302987456321716, 0.9203413724899292, 0.9288762211799622, 0.9146515130996704, 0.9302987456321716, 0.9345661401748657, 0.9573257565498352, 0.9302987456321716, 0.9231863617897034, 0.9388335943222046, 0.933143675327301, 0.9516358375549316, 0.9559032917022705, 0.9587482213973999, 0.9587482213973999, 0.9573257565498352, 0.941678524017334, 0.9502133727073669, 0.9402560591697693, 0.9445234537124634, 0.9530583024024963, 0.9644381403923035, 0.9601706862449646, 0.9658606052398682, 0.9644381403923035, 0.9672830700874329, 0.9658606052398682, 0.9672830700874329, 0.9729729890823364, 0.9658606052398682, 0.9530583024024963, 0.9615931510925293, 0.9559032917022705, 0.9630156755447388, 0.9601706862449646, 0.9644381403923035, 0.9672830700874329, 0.9601706862449646, 0.9687055349349976, 0.9473684430122375, 0.9559032917022705, 0.9516358375549316, 0.9715505242347717, 0.9644381403923035, 0.9701279997825623, 0.9573257565498352, 0.9701279997825623, 0.9559032917022705, 0.9644381403923035, 0.9743954539299011, 0.9516358375549316, 0.9644381403923035, 0.9615931510925293, 0.9743954539299011, 0.9658606052398682, 0.9559032917022705, 0.9630156755447388, 0.9743954539299011, 0.9758179187774658, 0.9687055349349976, 0.9800853729248047, 0.9502133727073669, 0.9559032917022705, 0.9687055349349976, 0.9786628484725952, 0.9729729890823364, 0.9601706862449646, 0.9729729890823364, 0.9729729890823364, 0.9715505242347717, 0.9687055349349976, 0.9772403836250305, 0.9758179187774658, 0.9800853729248047, 0.9743954539299011, 0.9687055349349976, 0.9701279997825623, 0.9729729890823364, 0.9743954539299011]
	Precision: [0.7964823842048645, 0.6931297779083252, 0.811188817024231, 0.8068535923957825, 0.8348484635353088, 0.8622291088104248, 0.8582202196121216, 0.8438438177108765, 0.8264209032058716, 0.8781954646110535, 0.8744394779205322, 0.8209607005119324, 0.9108186960220337, 0.8882352709770203, 0.8888888955116272, 0.8876080513000488, 0.9097525477409363, 0.9056603908538818, 0.9105339050292969, 0.9008620977401733, 0.9277456402778625, 0.9342857003211975, 0.9295976758003235, 0.9395683407783508, 0.9265129566192627, 0.9369627237319946, 0.9466858506202698, 0.9641833901405334, 0.9423630833625793, 0.9365079402923584, 0.9482758641242981, 0.939828097820282, 0.9557142853736877, 0.9599999785423279, 0.9642346501350403, 0.9628571271896362, 0.9600570797920227, 0.9430199265480042, 0.9556509256362915, 0.9483500719070435, 0.952654242515564, 0.9612625241279602, 0.9741379022598267, 0.9656652212142944, 0.9672364592552185, 0.9658119678497314, 0.9700428247451782, 0.9672364592552185, 0.9728183150291443, 0.9743589758872986, 0.968616247177124, 0.9585121870040894, 0.9670958518981934, 0.9641320109367371, 0.9657632112503052, 0.9642857313156128, 0.9658119678497314, 0.9700428247451782, 0.9642857313156128, 0.9714693427085876, 0.9487179517745972, 0.9572649598121643, 0.9543509483337402, 0.9729344844818115, 0.9658119678497314, 0.9715099930763245, 0.9586894512176514, 0.9715099930763245, 0.9599999785423279, 0.9658119678497314, 0.9757834672927856, 0.9543509483337402, 0.9658119678497314, 0.9657142758369446, 0.9743954539299011, 0.9672364592552185, 0.9586305022239685, 0.9630156755447388, 0.9785714149475098, 0.9758179187774658, 0.9687055349349976, 0.9828816056251526, 0.9502133727073669, 0.9572649598121643, 0.9714693427085876, 0.9800570011138916, 0.9729729890823364, 0.9615384340286255, 0.9757489562034607, 0.9729729890823364, 0.9715505242347717, 0.9687055349349976, 0.9786324501037598, 0.9786019921302795, 0.9814814925193787, 0.9757834672927856, 0.9700854420661926, 0.9715099930763245, 0.9729729890823364, 0.9771754741668701]
	Accuracy: [0.6614509224891663, 0.6742532253265381, 0.7467994093894958, 0.7837837934494019, 0.8093883395195007, 0.8421052694320679, 0.837837815284729, 0.8207681179046631, 0.7823613286018372, 0.8605974316596985, 0.8648648858070374, 0.8136557340621948, 0.8933143615722656, 0.8762446641921997, 0.874822199344635, 0.8790895938873291, 0.9018492102622986, 0.8990042805671692, 0.9061166644096375, 0.8990042805671692, 0.9174964427947998, 0.933143675327301, 0.9231863617897034, 0.9345661401748657, 0.9189189076423645, 0.9345661401748657, 0.9431009888648987, 0.9601706862449646, 0.9359886050224304, 0.9260312914848328, 0.9473684430122375, 0.9345661401748657, 0.9530583024024963, 0.9559032917022705, 0.9601706862449646, 0.9601706862449646, 0.9587482213973999, 0.9431009888648987, 0.9530583024024963, 0.941678524017334, 0.9487909078598022, 0.9587482213973999, 0.9701279997825623, 0.9615931510925293, 0.9672830700874329, 0.9644381403923035, 0.9687055349349976, 0.9672830700874329, 0.9701279997825623, 0.9729729890823364, 0.9687055349349976, 0.9559032917022705, 0.9644381403923035, 0.9601706862449646, 0.9644381403923035, 0.9615931510925293, 0.9658606052398682, 0.9687055349349976, 0.9615931510925293, 0.9701279997825623, 0.9473684430122375, 0.9559032917022705, 0.9530583024024963, 0.9715505242347717, 0.9658606052398682, 0.9701279997825623, 0.9587482213973999, 0.9715505242347717, 0.9587482213973999, 0.9658606052398682, 0.9743954539299011, 0.9530583024024963, 0.9644381403923035, 0.9630156755447388, 0.9743954539299011, 0.9672830700874329, 0.9559032917022705, 0.9630156755447388, 0.9772403836250305, 0.9758179187774658, 0.9687055349349976, 0.9800853729248047, 0.9502133727073669, 0.9573257565498352, 0.9701279997825623, 0.9786628484725952, 0.9729729890823364, 0.9615931510925293, 0.9743954539299011, 0.9729729890823364, 0.9715505242347717, 0.9687055349349976, 0.9786628484725952, 0.9758179187774658, 0.9815078377723694, 0.9743954539299011, 0.9687055349349976, 0.9701279997825623, 0.9729729890823364, 0.9758179187774658]
  • F2 Score: 0.9750
  • Recall: 0.9743
  • Precision: 0.9772
  • Accuracy: 0.9758

The second confusion matrix and the associated metrics indicate an even higher performance of the model in classifying brain tumors compared to the first one.

  • Glioma (0): The model correctly identified Glioma 154 times, and there are no instances where Glioma is incorrectly predicted as 'No Tumor'. This is an improvement compared to the previous model, which had 4 instances of false negatives for Glioma.

  • Meningioma (1): It correctly identified Meningioma 157 times, with only 1 instance incorrectly predicted as 'No Tumor'. Compared to the previous model, which had 5 false negatives for Meningioma, this represents a significant improvement.

  • No Tumor (2): The model perfectly classified all 200 instances with no false negatives, which is consistent with the previous model's performance for this class.

  • Pituitary (3): The model identified Pituitary tumors correctly 175 times, with only 1 case misclassified as 'No Tumor'. This is an improvement compared to the previous model, which had 2 false negatives for Pituitary.

The F2 score for this model is 0.9750, which, given that the F2 score gives more weight to recall than precision, indicates that the model is very effective at minimizing false negatives. It is an improvement over the previous model's F2 score of 0.9350, indicating better overall performance with respect to the balance between recall and precision. The recall rate of 0.9743 is a slight improvement over the previous recall rate of 0.9331, and the precision rate of 0.9772 is also an improvement over the previous precision rate of 0.9425.

The accuracy of this model is 0.9758, which is a small but notable improvement from the previous model's accuracy of 0.9388.

Number of Filters (f): The number of filters in a convolutional layer determines the number of unique feature detectors that will be applied to the input data. Increasing the number of filters can allow the network to capture a wider array of features from the input data. This can potentially lead to better model performance, as the network can learn more complex and subtle patterns in the data. However, it also increases the number of trainable parameters, which can lead to a risk of overfitting if not managed properly (e.g., with sufficient training data or regularization).

Kernel Regularization Constant (lam): Regularization is used to prevent overfitting by penalizing large weights in the model. A smaller lambda (λ) means less regularization, allowing the weights to grow larger if needed to minimize the loss function. This can lead to a more flexible model that fits the training data more closely. Conversely, a larger λ imposes stronger penalties on large weights, leading to a simpler model that may generalize better but potentially at the cost of not fitting the training data as closely.

Batch Size: While the batch size does not directly affect the model's capacity to learn, it influences the training dynamics. Larger batch sizes offer more stable gradients from batch to batch, which can lead to more stable convergence. However, smaller batch sizes often provide a regularizing effect and can escape local minima due to the noise they introduce in the gradient estimation. Finding the right batch size is often a balance between computational efficiency and the quality of the convergence.

When comparing Conv1A and Conv1B, the increased number of filters in Conv1B allows the model to potentially learn more complex features at each layer. The reduced regularization term in Conv1B (a smaller lam value) suggests that the weights are allowed to grow larger, giving the model more flexibility during training. This could mean that Conv1B is more prone to overfitting than Conv1A; however, if the validation accuracy has also improved, it indicates that the model is still generalizing well to unseen data.

In [ ]:
# Parameters
f = 32          # No. Filters
l = 6           # No. Layers
k = 5           # Kernel Size (k x k)
lam = 0.0001    # Kernel Regularization Constant (L2)

# Initialize Sequential Network
Conv2A = Sequential()

# Add Augmentations Directly
# Horizontal Flip, 10% Rotation, 10% Move, Brightness / Contrast Adjust 
Conv2A.add( RandomFlip("horizontal") )
Conv2A.add( RandomRotation(0.1) )
Conv2A.add( RandomTranslation(height_factor = 0.1, width_factor = 0.1) )
Conv2A.add( RandomBrightness(factor = 0.1, value_range = (0.0, 1.0)) )
Conv2A.add( RandomContrast(0.1) ) 

# Add Multiple Layers (Changeable)
for i in range(l):
  
    # Add Convolutional Layer, Follow With Pooling
    # Note: Loosely Following Unet Architecture
    Conv2A.add(Conv2D(filters = (f * 2 ** i),
                    input_shape = (IMG_SIZE, IMG_SIZE, 1),
                    kernel_size = (k, k), 
                    kernel_regularizer = l2(lam),
                    kernel_initializer = 'he_uniform',
                    padding = 'same', 
                    activation = 'relu',
                    data_format = 'channels_last'))
    Conv2A.add(MaxPooling2D(pool_size = (2, 2), data_format = 'channels_last'))

# Flatten After Convolutional Layers
Conv2A.add(Flatten())

# Same Final Dense Layer
Conv2A.add(Dense(NUM_CLASSES, activation = 'softmax', 
                kernel_initializer = 'glorot_uniform',
                kernel_regularizer = l2(lam)
                ))
In [ ]:
# Train With CC, Adam
Conv2A.compile(loss = 'categorical_crossentropy',
               optimizer = 'adam',
               metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Build Model With Basic Parameters (Build For Grayscale Images)
Conv2A.build((None, IMG_SIZE, IMG_SIZE, 1))
Conv2A.summary()

# Fit Model (High Patience For Full Convergence)
H2A = Conv2A.fit(X_train, y_train_encoded, 
          batch_size = 128,
          epochs = 100, 
          verbose = 1,
          validation_data = (X_test, y_test_encoded))
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip_3 (RandomFlip)  (None, 256, 256, 1)       0         
                                                                 
 random_rotation_3 (RandomR  (None, 256, 256, 1)       0         
 otation)                                                        
                                                                 
 random_translation_3 (Rand  (None, 256, 256, 1)       0         
 omTranslation)                                                  
                                                                 
 random_brightness_3 (Rando  (None, 256, 256, 1)       0         
 mBrightness)                                                    
                                                                 
 random_contrast_3 (RandomC  (None, 256, 256, 1)       0         
 ontrast)                                                        
                                                                 
 conv2d_12 (Conv2D)          (None, 256, 256, 32)      832       
                                                                 
 max_pooling2d_12 (MaxPooli  (None, 128, 128, 32)      0         
 ng2D)                                                           
                                                                 
 conv2d_13 (Conv2D)          (None, 128, 128, 64)      51264     
                                                                 
 max_pooling2d_13 (MaxPooli  (None, 64, 64, 64)        0         
 ng2D)                                                           
                                                                 
 conv2d_14 (Conv2D)          (None, 64, 64, 128)       204928    
                                                                 
 max_pooling2d_14 (MaxPooli  (None, 32, 32, 128)       0         
 ng2D)                                                           
                                                                 
 conv2d_15 (Conv2D)          (None, 32, 32, 256)       819456    
                                                                 
 max_pooling2d_15 (MaxPooli  (None, 16, 16, 256)       0         
 ng2D)                                                           
                                                                 
 conv2d_16 (Conv2D)          (None, 16, 16, 512)       3277312   
                                                                 
 max_pooling2d_16 (MaxPooli  (None, 8, 8, 512)         0         
 ng2D)                                                           
                                                                 
 conv2d_17 (Conv2D)          (None, 8, 8, 1024)        13108224  
                                                                 
 max_pooling2d_17 (MaxPooli  (None, 4, 4, 1024)        0         
 ng2D)                                                           
                                                                 
 flatten_3 (Flatten)         (None, 16384)             0         
                                                                 
 dense_3 (Dense)             (None, 4)                 65540     
                                                                 
=================================================================
Total params: 17527556 (66.86 MB)
Trainable params: 17527556 (66.86 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
50/50 [==============================] - 15s 145ms/step - loss: 5.4878 - accuracy: 0.3473 - precision_8: 0.4745 - recall_8: 0.1046 - f2_score: 0.1239 - val_loss: 1.6410 - val_accuracy: 0.4253 - val_precision_8: 0.5429 - val_recall_8: 0.1081 - val_f2_score: 0.1287
Epoch 2/100
50/50 [==============================] - 5s 106ms/step - loss: 1.4645 - accuracy: 0.5320 - precision_8: 0.6482 - recall_8: 0.2764 - f2_score: 0.3122 - val_loss: 1.2296 - val_accuracy: 0.6188 - val_precision_8: 0.6982 - val_recall_8: 0.4509 - val_f2_score: 0.4853
Epoch 3/100
50/50 [==============================] - 4s 82ms/step - loss: 1.1865 - accuracy: 0.6647 - precision_8: 0.7487 - recall_8: 0.5214 - f2_score: 0.5551 - val_loss: 1.0844 - val_accuracy: 0.6842 - val_precision_8: 0.7213 - val_recall_8: 0.6444 - val_f2_score: 0.6584
Epoch 4/100
50/50 [==============================] - 4s 78ms/step - loss: 1.0421 - accuracy: 0.7229 - precision_8: 0.7784 - recall_8: 0.6402 - f2_score: 0.6638 - val_loss: 0.9372 - val_accuracy: 0.7568 - val_precision_8: 0.7895 - val_recall_8: 0.7041 - val_f2_score: 0.7197
Epoch 5/100
50/50 [==============================] - 4s 71ms/step - loss: 0.9600 - accuracy: 0.7489 - precision_8: 0.7927 - recall_8: 0.6902 - f2_score: 0.7085 - val_loss: 0.8226 - val_accuracy: 0.8051 - val_precision_8: 0.8318 - val_recall_8: 0.7525 - val_f2_score: 0.7671
Epoch 6/100
50/50 [==============================] - 4s 87ms/step - loss: 0.8830 - accuracy: 0.7745 - precision_8: 0.8055 - recall_8: 0.7286 - f2_score: 0.7428 - val_loss: 0.8147 - val_accuracy: 0.7866 - val_precision_8: 0.8057 - val_recall_8: 0.7610 - val_f2_score: 0.7696
Epoch 7/100
50/50 [==============================] - 4s 85ms/step - loss: 0.8132 - accuracy: 0.8055 - precision_8: 0.8348 - recall_8: 0.7699 - f2_score: 0.7821 - val_loss: 0.7264 - val_accuracy: 0.8222 - val_precision_8: 0.8469 - val_recall_8: 0.7866 - val_f2_score: 0.7980
Epoch 8/100
50/50 [==============================] - 4s 80ms/step - loss: 0.7674 - accuracy: 0.8127 - precision_8: 0.8372 - recall_8: 0.7869 - f2_score: 0.7964 - val_loss: 0.7345 - val_accuracy: 0.8279 - val_precision_8: 0.8376 - val_recall_8: 0.7994 - val_f2_score: 0.8068
Epoch 9/100
50/50 [==============================] - 4s 80ms/step - loss: 0.7024 - accuracy: 0.8408 - precision_8: 0.8589 - recall_8: 0.8166 - f2_score: 0.8247 - val_loss: 0.6643 - val_accuracy: 0.8450 - val_precision_8: 0.8645 - val_recall_8: 0.8350 - val_f2_score: 0.8407
Epoch 10/100
50/50 [==============================] - 4s 76ms/step - loss: 0.6574 - accuracy: 0.8555 - precision_8: 0.8747 - recall_8: 0.8348 - f2_score: 0.8425 - val_loss: 0.5774 - val_accuracy: 0.8748 - val_precision_8: 0.8865 - val_recall_8: 0.8663 - val_f2_score: 0.8702
Epoch 11/100
50/50 [==============================] - 4s 82ms/step - loss: 0.6571 - accuracy: 0.8532 - precision_8: 0.8685 - recall_8: 0.8307 - f2_score: 0.8380 - val_loss: 0.6313 - val_accuracy: 0.8663 - val_precision_8: 0.8809 - val_recall_8: 0.8521 - val_f2_score: 0.8577
Epoch 12/100
50/50 [==============================] - 4s 89ms/step - loss: 0.6056 - accuracy: 0.8685 - precision_8: 0.8826 - recall_8: 0.8497 - f2_score: 0.8561 - val_loss: 0.5516 - val_accuracy: 0.8848 - val_precision_8: 0.8977 - val_recall_8: 0.8734 - val_f2_score: 0.8781
Epoch 13/100
50/50 [==============================] - 4s 89ms/step - loss: 0.5737 - accuracy: 0.8788 - precision_8: 0.8904 - recall_8: 0.8653 - f2_score: 0.8703 - val_loss: 0.8454 - val_accuracy: 0.8037 - val_precision_8: 0.8225 - val_recall_8: 0.7909 - val_f2_score: 0.7970
Epoch 14/100
50/50 [==============================] - 4s 84ms/step - loss: 0.5361 - accuracy: 0.8949 - precision_8: 0.9063 - recall_8: 0.8826 - f2_score: 0.8872 - val_loss: 0.4427 - val_accuracy: 0.9346 - val_precision_8: 0.9450 - val_recall_8: 0.9289 - val_f2_score: 0.9321
Epoch 15/100
50/50 [==============================] - 4s 73ms/step - loss: 0.4835 - accuracy: 0.9114 - precision_8: 0.9222 - recall_8: 0.9003 - f2_score: 0.9046 - val_loss: 0.4430 - val_accuracy: 0.9175 - val_precision_8: 0.9247 - val_recall_8: 0.9090 - val_f2_score: 0.9121
Epoch 16/100
50/50 [==============================] - 4s 79ms/step - loss: 0.4890 - accuracy: 0.9070 - precision_8: 0.9160 - recall_8: 0.8984 - f2_score: 0.9019 - val_loss: 0.4577 - val_accuracy: 0.9189 - val_precision_8: 0.9275 - val_recall_8: 0.9104 - val_f2_score: 0.9138
Epoch 17/100
50/50 [==============================] - 4s 90ms/step - loss: 0.4529 - accuracy: 0.9214 - precision_8: 0.9284 - recall_8: 0.9116 - f2_score: 0.9149 - val_loss: 0.4939 - val_accuracy: 0.9132 - val_precision_8: 0.9169 - val_recall_8: 0.9104 - val_f2_score: 0.9117
Epoch 18/100
50/50 [==============================] - 4s 75ms/step - loss: 0.4345 - accuracy: 0.9247 - precision_8: 0.9318 - recall_8: 0.9171 - f2_score: 0.9200 - val_loss: 0.4192 - val_accuracy: 0.9346 - val_precision_8: 0.9397 - val_recall_8: 0.9303 - val_f2_score: 0.9322
Epoch 19/100
50/50 [==============================] - 4s 81ms/step - loss: 0.4360 - accuracy: 0.9247 - precision_8: 0.9318 - recall_8: 0.9184 - f2_score: 0.9210 - val_loss: 0.4291 - val_accuracy: 0.9175 - val_precision_8: 0.9250 - val_recall_8: 0.9118 - val_f2_score: 0.9144
Epoch 20/100
50/50 [==============================] - 4s 71ms/step - loss: 0.4298 - accuracy: 0.9239 - precision_8: 0.9317 - recall_8: 0.9157 - f2_score: 0.9188 - val_loss: 0.3679 - val_accuracy: 0.9303 - val_precision_8: 0.9353 - val_recall_8: 0.9260 - val_f2_score: 0.9279
Epoch 21/100
50/50 [==============================] - 4s 73ms/step - loss: 0.3823 - accuracy: 0.9403 - precision_8: 0.9451 - recall_8: 0.9324 - f2_score: 0.9350 - val_loss: 0.3448 - val_accuracy: 0.9445 - val_precision_8: 0.9497 - val_recall_8: 0.9403 - val_f2_score: 0.9421
Epoch 22/100
50/50 [==============================] - 3s 68ms/step - loss: 0.3885 - accuracy: 0.9359 - precision_8: 0.9405 - recall_8: 0.9310 - f2_score: 0.9329 - val_loss: 0.4178 - val_accuracy: 0.9275 - val_precision_8: 0.9322 - val_recall_8: 0.9189 - val_f2_score: 0.9215
Epoch 23/100
50/50 [==============================] - 4s 74ms/step - loss: 0.3691 - accuracy: 0.9391 - precision_8: 0.9442 - recall_8: 0.9351 - f2_score: 0.9369 - val_loss: 0.3096 - val_accuracy: 0.9602 - val_precision_8: 0.9628 - val_recall_8: 0.9559 - val_f2_score: 0.9573
Epoch 24/100
50/50 [==============================] - 4s 74ms/step - loss: 0.3448 - accuracy: 0.9454 - precision_8: 0.9519 - recall_8: 0.9416 - f2_score: 0.9436 - val_loss: 0.4885 - val_accuracy: 0.9047 - val_precision_8: 0.9117 - val_recall_8: 0.8962 - val_f2_score: 0.8992
Epoch 25/100
50/50 [==============================] - 4s 82ms/step - loss: 0.3583 - accuracy: 0.9422 - precision_8: 0.9473 - recall_8: 0.9384 - f2_score: 0.9402 - val_loss: 0.3013 - val_accuracy: 0.9616 - val_precision_8: 0.9656 - val_recall_8: 0.9587 - val_f2_score: 0.9601
Epoch 26/100
50/50 [==============================] - 4s 79ms/step - loss: 0.3393 - accuracy: 0.9483 - precision_8: 0.9518 - recall_8: 0.9438 - f2_score: 0.9454 - val_loss: 0.4396 - val_accuracy: 0.9047 - val_precision_8: 0.9120 - val_recall_8: 0.8990 - val_f2_score: 0.9016
Epoch 27/100
50/50 [==============================] - 4s 75ms/step - loss: 0.3249 - accuracy: 0.9546 - precision_8: 0.9594 - recall_8: 0.9506 - f2_score: 0.9524 - val_loss: 0.2932 - val_accuracy: 0.9559 - val_precision_8: 0.9599 - val_recall_8: 0.9531 - val_f2_score: 0.9544
Epoch 28/100
50/50 [==============================] - 4s 74ms/step - loss: 0.3129 - accuracy: 0.9540 - precision_8: 0.9573 - recall_8: 0.9500 - f2_score: 0.9514 - val_loss: 0.3517 - val_accuracy: 0.9346 - val_precision_8: 0.9410 - val_recall_8: 0.9303 - val_f2_score: 0.9324
Epoch 29/100
50/50 [==============================] - 4s 74ms/step - loss: 0.3221 - accuracy: 0.9514 - precision_8: 0.9559 - recall_8: 0.9462 - f2_score: 0.9481 - val_loss: 0.3151 - val_accuracy: 0.9502 - val_precision_8: 0.9557 - val_recall_8: 0.9502 - val_f2_score: 0.9513
Epoch 30/100
50/50 [==============================] - 4s 75ms/step - loss: 0.3043 - accuracy: 0.9585 - precision_8: 0.9614 - recall_8: 0.9549 - f2_score: 0.9562 - val_loss: 0.2911 - val_accuracy: 0.9587 - val_precision_8: 0.9628 - val_recall_8: 0.9559 - val_f2_score: 0.9573
Epoch 31/100
50/50 [==============================] - 4s 72ms/step - loss: 0.2931 - accuracy: 0.9619 - precision_8: 0.9650 - recall_8: 0.9589 - f2_score: 0.9601 - val_loss: 0.2785 - val_accuracy: 0.9630 - val_precision_8: 0.9670 - val_recall_8: 0.9587 - val_f2_score: 0.9604
Epoch 32/100
50/50 [==============================] - 4s 81ms/step - loss: 0.2800 - accuracy: 0.9616 - precision_8: 0.9642 - recall_8: 0.9585 - f2_score: 0.9597 - val_loss: 0.2920 - val_accuracy: 0.9587 - val_precision_8: 0.9601 - val_recall_8: 0.9573 - val_f2_score: 0.9579
Epoch 33/100
50/50 [==============================] - 4s 88ms/step - loss: 0.2946 - accuracy: 0.9611 - precision_8: 0.9650 - recall_8: 0.9587 - f2_score: 0.9599 - val_loss: 0.2917 - val_accuracy: 0.9616 - val_precision_8: 0.9628 - val_recall_8: 0.9573 - val_f2_score: 0.9584
Epoch 34/100
50/50 [==============================] - 3s 67ms/step - loss: 0.2845 - accuracy: 0.9628 - precision_8: 0.9653 - recall_8: 0.9601 - f2_score: 0.9612 - val_loss: 0.2496 - val_accuracy: 0.9701 - val_precision_8: 0.9715 - val_recall_8: 0.9701 - val_f2_score: 0.9704
Epoch 35/100
50/50 [==============================] - 4s 78ms/step - loss: 0.2763 - accuracy: 0.9630 - precision_8: 0.9661 - recall_8: 0.9595 - f2_score: 0.9608 - val_loss: 0.3513 - val_accuracy: 0.9431 - val_precision_8: 0.9441 - val_recall_8: 0.9374 - val_f2_score: 0.9387
Epoch 36/100
50/50 [==============================] - 4s 78ms/step - loss: 0.2637 - accuracy: 0.9665 - precision_8: 0.9681 - recall_8: 0.9642 - f2_score: 0.9650 - val_loss: 0.2620 - val_accuracy: 0.9730 - val_precision_8: 0.9743 - val_recall_8: 0.9716 - val_f2_score: 0.9721
Epoch 37/100
50/50 [==============================] - 4s 80ms/step - loss: 0.2706 - accuracy: 0.9650 - precision_8: 0.9665 - recall_8: 0.9622 - f2_score: 0.9630 - val_loss: 0.3413 - val_accuracy: 0.9388 - val_precision_8: 0.9426 - val_recall_8: 0.9346 - val_f2_score: 0.9362
Epoch 38/100
50/50 [==============================] - 4s 79ms/step - loss: 0.2674 - accuracy: 0.9641 - precision_8: 0.9669 - recall_8: 0.9622 - f2_score: 0.9631 - val_loss: 0.2662 - val_accuracy: 0.9587 - val_precision_8: 0.9614 - val_recall_8: 0.9573 - val_f2_score: 0.9581
Epoch 39/100
50/50 [==============================] - 3s 68ms/step - loss: 0.2573 - accuracy: 0.9672 - precision_8: 0.9701 - recall_8: 0.9653 - f2_score: 0.9663 - val_loss: 0.2867 - val_accuracy: 0.9531 - val_precision_8: 0.9530 - val_recall_8: 0.9516 - val_f2_score: 0.9519
Epoch 40/100
50/50 [==============================] - 5s 92ms/step - loss: 0.2487 - accuracy: 0.9693 - precision_8: 0.9712 - recall_8: 0.9671 - f2_score: 0.9679 - val_loss: 0.2840 - val_accuracy: 0.9545 - val_precision_8: 0.9585 - val_recall_8: 0.9531 - val_f2_score: 0.9541
Epoch 41/100
50/50 [==============================] - 3s 69ms/step - loss: 0.2560 - accuracy: 0.9646 - precision_8: 0.9663 - recall_8: 0.9620 - f2_score: 0.9629 - val_loss: 0.2406 - val_accuracy: 0.9644 - val_precision_8: 0.9644 - val_recall_8: 0.9630 - val_f2_score: 0.9633
Epoch 42/100
50/50 [==============================] - 4s 76ms/step - loss: 0.2475 - accuracy: 0.9680 - precision_8: 0.9698 - recall_8: 0.9658 - f2_score: 0.9666 - val_loss: 0.2589 - val_accuracy: 0.9673 - val_precision_8: 0.9700 - val_recall_8: 0.9659 - val_f2_score: 0.9667
Epoch 43/100
50/50 [==============================] - 3s 69ms/step - loss: 0.2429 - accuracy: 0.9701 - precision_8: 0.9720 - recall_8: 0.9682 - f2_score: 0.9690 - val_loss: 0.3823 - val_accuracy: 0.9360 - val_precision_8: 0.9372 - val_recall_8: 0.9346 - val_f2_score: 0.9351
Epoch 44/100
50/50 [==============================] - 4s 76ms/step - loss: 0.2540 - accuracy: 0.9665 - precision_8: 0.9691 - recall_8: 0.9636 - f2_score: 0.9647 - val_loss: 0.2383 - val_accuracy: 0.9730 - val_precision_8: 0.9729 - val_recall_8: 0.9716 - val_f2_score: 0.9718
Epoch 45/100
50/50 [==============================] - 4s 72ms/step - loss: 0.2438 - accuracy: 0.9677 - precision_8: 0.9701 - recall_8: 0.9653 - f2_score: 0.9663 - val_loss: 0.2387 - val_accuracy: 0.9687 - val_precision_8: 0.9700 - val_recall_8: 0.9673 - val_f2_score: 0.9678
Epoch 46/100
50/50 [==============================] - 4s 73ms/step - loss: 0.2313 - accuracy: 0.9734 - precision_8: 0.9757 - recall_8: 0.9718 - f2_score: 0.9726 - val_loss: 0.2461 - val_accuracy: 0.9701 - val_precision_8: 0.9757 - val_recall_8: 0.9701 - val_f2_score: 0.9712
Epoch 47/100
50/50 [==============================] - 3s 67ms/step - loss: 0.2304 - accuracy: 0.9745 - precision_8: 0.9757 - recall_8: 0.9731 - f2_score: 0.9736 - val_loss: 0.2372 - val_accuracy: 0.9687 - val_precision_8: 0.9714 - val_recall_8: 0.9673 - val_f2_score: 0.9681
Epoch 48/100
50/50 [==============================] - 4s 73ms/step - loss: 0.2322 - accuracy: 0.9742 - precision_8: 0.9749 - recall_8: 0.9726 - f2_score: 0.9731 - val_loss: 0.2461 - val_accuracy: 0.9673 - val_precision_8: 0.9713 - val_recall_8: 0.9644 - val_f2_score: 0.9658
Epoch 49/100
50/50 [==============================] - 3s 66ms/step - loss: 0.2160 - accuracy: 0.9759 - precision_8: 0.9784 - recall_8: 0.9737 - f2_score: 0.9747 - val_loss: 0.2973 - val_accuracy: 0.9573 - val_precision_8: 0.9587 - val_recall_8: 0.9573 - val_f2_score: 0.9576
Epoch 50/100
50/50 [==============================] - 4s 71ms/step - loss: 0.2352 - accuracy: 0.9703 - precision_8: 0.9722 - recall_8: 0.9680 - f2_score: 0.9689 - val_loss: 0.2442 - val_accuracy: 0.9659 - val_precision_8: 0.9671 - val_recall_8: 0.9630 - val_f2_score: 0.9638
Epoch 51/100
50/50 [==============================] - 3s 66ms/step - loss: 0.2135 - accuracy: 0.9777 - precision_8: 0.9794 - recall_8: 0.9759 - f2_score: 0.9766 - val_loss: 0.2422 - val_accuracy: 0.9659 - val_precision_8: 0.9672 - val_recall_8: 0.9644 - val_f2_score: 0.9650
Epoch 52/100
50/50 [==============================] - 4s 70ms/step - loss: 0.2050 - accuracy: 0.9778 - precision_8: 0.9795 - recall_8: 0.9774 - f2_score: 0.9778 - val_loss: 0.2171 - val_accuracy: 0.9687 - val_precision_8: 0.9700 - val_recall_8: 0.9673 - val_f2_score: 0.9678
Epoch 53/100
50/50 [==============================] - 3s 68ms/step - loss: 0.2107 - accuracy: 0.9769 - precision_8: 0.9787 - recall_8: 0.9755 - f2_score: 0.9761 - val_loss: 0.2330 - val_accuracy: 0.9644 - val_precision_8: 0.9658 - val_recall_8: 0.9630 - val_f2_score: 0.9636
Epoch 54/100
50/50 [==============================] - 4s 74ms/step - loss: 0.1923 - accuracy: 0.9850 - precision_8: 0.9864 - recall_8: 0.9835 - f2_score: 0.9841 - val_loss: 0.2406 - val_accuracy: 0.9730 - val_precision_8: 0.9742 - val_recall_8: 0.9673 - val_f2_score: 0.9687
Epoch 55/100
50/50 [==============================] - 4s 88ms/step - loss: 0.2158 - accuracy: 0.9777 - precision_8: 0.9786 - recall_8: 0.9755 - f2_score: 0.9761 - val_loss: 0.1897 - val_accuracy: 0.9858 - val_precision_8: 0.9872 - val_recall_8: 0.9844 - val_f2_score: 0.9849
Epoch 56/100
50/50 [==============================] - 3s 68ms/step - loss: 0.2025 - accuracy: 0.9804 - precision_8: 0.9813 - recall_8: 0.9790 - f2_score: 0.9794 - val_loss: 0.2147 - val_accuracy: 0.9787 - val_precision_8: 0.9801 - val_recall_8: 0.9787 - val_f2_score: 0.9789
Epoch 57/100
50/50 [==============================] - 4s 84ms/step - loss: 0.2168 - accuracy: 0.9748 - precision_8: 0.9759 - recall_8: 0.9728 - f2_score: 0.9734 - val_loss: 0.2342 - val_accuracy: 0.9616 - val_precision_8: 0.9615 - val_recall_8: 0.9602 - val_f2_score: 0.9604
Epoch 58/100
50/50 [==============================] - 4s 70ms/step - loss: 0.2124 - accuracy: 0.9731 - precision_8: 0.9743 - recall_8: 0.9712 - f2_score: 0.9718 - val_loss: 0.2269 - val_accuracy: 0.9716 - val_precision_8: 0.9729 - val_recall_8: 0.9716 - val_f2_score: 0.9718
Epoch 59/100
50/50 [==============================] - 4s 74ms/step - loss: 0.1983 - accuracy: 0.9801 - precision_8: 0.9804 - recall_8: 0.9791 - f2_score: 0.9794 - val_loss: 0.2242 - val_accuracy: 0.9659 - val_precision_8: 0.9659 - val_recall_8: 0.9659 - val_f2_score: 0.9659
Epoch 60/100
50/50 [==============================] - 3s 68ms/step - loss: 0.1965 - accuracy: 0.9777 - precision_8: 0.9791 - recall_8: 0.9764 - f2_score: 0.9769 - val_loss: 0.1953 - val_accuracy: 0.9829 - val_precision_8: 0.9843 - val_recall_8: 0.9801 - val_f2_score: 0.9809
Epoch 61/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1858 - accuracy: 0.9824 - precision_8: 0.9848 - recall_8: 0.9816 - f2_score: 0.9823 - val_loss: 0.2603 - val_accuracy: 0.9587 - val_precision_8: 0.9614 - val_recall_8: 0.9559 - val_f2_score: 0.9570
Epoch 62/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1883 - accuracy: 0.9812 - precision_8: 0.9832 - recall_8: 0.9801 - f2_score: 0.9807 - val_loss: 0.1980 - val_accuracy: 0.9772 - val_precision_8: 0.9786 - val_recall_8: 0.9772 - val_f2_score: 0.9775
Epoch 63/100
50/50 [==============================] - 4s 71ms/step - loss: 0.1911 - accuracy: 0.9793 - precision_8: 0.9808 - recall_8: 0.9783 - f2_score: 0.9788 - val_loss: 0.2268 - val_accuracy: 0.9730 - val_precision_8: 0.9743 - val_recall_8: 0.9716 - val_f2_score: 0.9721
Epoch 64/100
50/50 [==============================] - 3s 67ms/step - loss: 0.2027 - accuracy: 0.9764 - precision_8: 0.9782 - recall_8: 0.9744 - f2_score: 0.9751 - val_loss: 0.2642 - val_accuracy: 0.9545 - val_precision_8: 0.9558 - val_recall_8: 0.9531 - val_f2_score: 0.9536
Epoch 65/100
50/50 [==============================] - 4s 75ms/step - loss: 0.1878 - accuracy: 0.9821 - precision_8: 0.9827 - recall_8: 0.9810 - f2_score: 0.9814 - val_loss: 0.1823 - val_accuracy: 0.9844 - val_precision_8: 0.9872 - val_recall_8: 0.9844 - val_f2_score: 0.9849
Epoch 66/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1763 - accuracy: 0.9845 - precision_8: 0.9851 - recall_8: 0.9835 - f2_score: 0.9839 - val_loss: 0.2022 - val_accuracy: 0.9716 - val_precision_8: 0.9714 - val_recall_8: 0.9673 - val_f2_score: 0.9681
Epoch 67/100
50/50 [==============================] - 4s 72ms/step - loss: 0.1848 - accuracy: 0.9797 - precision_8: 0.9810 - recall_8: 0.9785 - f2_score: 0.9790 - val_loss: 0.2156 - val_accuracy: 0.9630 - val_precision_8: 0.9656 - val_recall_8: 0.9587 - val_f2_score: 0.9601
Epoch 68/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1704 - accuracy: 0.9851 - precision_8: 0.9856 - recall_8: 0.9845 - f2_score: 0.9847 - val_loss: 0.1955 - val_accuracy: 0.9730 - val_precision_8: 0.9744 - val_recall_8: 0.9730 - val_f2_score: 0.9732
Epoch 69/100
50/50 [==============================] - 4s 71ms/step - loss: 0.1753 - accuracy: 0.9839 - precision_8: 0.9848 - recall_8: 0.9832 - f2_score: 0.9835 - val_loss: 0.2133 - val_accuracy: 0.9716 - val_precision_8: 0.9729 - val_recall_8: 0.9716 - val_f2_score: 0.9718
Epoch 70/100
50/50 [==============================] - 4s 85ms/step - loss: 0.1717 - accuracy: 0.9864 - precision_8: 0.9870 - recall_8: 0.9856 - f2_score: 0.9859 - val_loss: 0.2169 - val_accuracy: 0.9644 - val_precision_8: 0.9658 - val_recall_8: 0.9630 - val_f2_score: 0.9636
Epoch 71/100
50/50 [==============================] - 4s 70ms/step - loss: 0.1854 - accuracy: 0.9786 - precision_8: 0.9794 - recall_8: 0.9772 - f2_score: 0.9776 - val_loss: 0.2258 - val_accuracy: 0.9630 - val_precision_8: 0.9630 - val_recall_8: 0.9630 - val_f2_score: 0.9630
Epoch 72/100
50/50 [==============================] - 4s 79ms/step - loss: 0.1889 - accuracy: 0.9778 - precision_8: 0.9791 - recall_8: 0.9767 - f2_score: 0.9772 - val_loss: 0.1896 - val_accuracy: 0.9758 - val_precision_8: 0.9772 - val_recall_8: 0.9758 - val_f2_score: 0.9761
Epoch 73/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1830 - accuracy: 0.9785 - precision_8: 0.9794 - recall_8: 0.9769 - f2_score: 0.9774 - val_loss: 0.1856 - val_accuracy: 0.9772 - val_precision_8: 0.9800 - val_recall_8: 0.9758 - val_f2_score: 0.9767
Epoch 74/100
50/50 [==============================] - 4s 74ms/step - loss: 0.1805 - accuracy: 0.9816 - precision_8: 0.9829 - recall_8: 0.9801 - f2_score: 0.9806 - val_loss: 0.2388 - val_accuracy: 0.9659 - val_precision_8: 0.9685 - val_recall_8: 0.9630 - val_f2_score: 0.9641
Epoch 75/100
50/50 [==============================] - 3s 70ms/step - loss: 0.2074 - accuracy: 0.9701 - precision_8: 0.9711 - recall_8: 0.9677 - f2_score: 0.9684 - val_loss: 0.1903 - val_accuracy: 0.9787 - val_precision_8: 0.9814 - val_recall_8: 0.9772 - val_f2_score: 0.9781
Epoch 76/100
50/50 [==============================] - 4s 72ms/step - loss: 0.1811 - accuracy: 0.9804 - precision_8: 0.9808 - recall_8: 0.9794 - f2_score: 0.9797 - val_loss: 0.2133 - val_accuracy: 0.9744 - val_precision_8: 0.9757 - val_recall_8: 0.9716 - val_f2_score: 0.9724
Epoch 77/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1831 - accuracy: 0.9810 - precision_8: 0.9816 - recall_8: 0.9804 - f2_score: 0.9806 - val_loss: 0.1822 - val_accuracy: 0.9758 - val_precision_8: 0.9786 - val_recall_8: 0.9758 - val_f2_score: 0.9764
Epoch 78/100
50/50 [==============================] - 4s 71ms/step - loss: 0.1682 - accuracy: 0.9851 - precision_8: 0.9857 - recall_8: 0.9839 - f2_score: 0.9842 - val_loss: 0.1591 - val_accuracy: 0.9858 - val_precision_8: 0.9858 - val_recall_8: 0.9858 - val_f2_score: 0.9858
Epoch 79/100
50/50 [==============================] - 4s 73ms/step - loss: 0.1769 - accuracy: 0.9816 - precision_8: 0.9822 - recall_8: 0.9799 - f2_score: 0.9804 - val_loss: 0.1791 - val_accuracy: 0.9815 - val_precision_8: 0.9843 - val_recall_8: 0.9801 - val_f2_score: 0.9809
Epoch 80/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1840 - accuracy: 0.9788 - precision_8: 0.9806 - recall_8: 0.9775 - f2_score: 0.9782 - val_loss: 0.1692 - val_accuracy: 0.9844 - val_precision_8: 0.9857 - val_recall_8: 0.9815 - val_f2_score: 0.9823
Epoch 81/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1641 - accuracy: 0.9850 - precision_8: 0.9859 - recall_8: 0.9839 - f2_score: 0.9843 - val_loss: 0.2087 - val_accuracy: 0.9744 - val_precision_8: 0.9758 - val_recall_8: 0.9744 - val_f2_score: 0.9747
Epoch 82/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1648 - accuracy: 0.9842 - precision_8: 0.9854 - recall_8: 0.9834 - f2_score: 0.9838 - val_loss: 0.1771 - val_accuracy: 0.9772 - val_precision_8: 0.9814 - val_recall_8: 0.9758 - val_f2_score: 0.9769
Epoch 83/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1504 - accuracy: 0.9891 - precision_8: 0.9895 - recall_8: 0.9884 - f2_score: 0.9887 - val_loss: 0.1740 - val_accuracy: 0.9844 - val_precision_8: 0.9843 - val_recall_8: 0.9829 - val_f2_score: 0.9832
Epoch 84/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1563 - accuracy: 0.9859 - precision_8: 0.9864 - recall_8: 0.9853 - f2_score: 0.9855 - val_loss: 0.1752 - val_accuracy: 0.9772 - val_precision_8: 0.9786 - val_recall_8: 0.9772 - val_f2_score: 0.9775
Epoch 85/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1657 - accuracy: 0.9809 - precision_8: 0.9821 - recall_8: 0.9804 - f2_score: 0.9807 - val_loss: 0.1666 - val_accuracy: 0.9858 - val_precision_8: 0.9858 - val_recall_8: 0.9858 - val_f2_score: 0.9858
Epoch 86/100
50/50 [==============================] - 4s 72ms/step - loss: 0.1544 - accuracy: 0.9870 - precision_8: 0.9875 - recall_8: 0.9866 - f2_score: 0.9867 - val_loss: 0.1568 - val_accuracy: 0.9858 - val_precision_8: 0.9858 - val_recall_8: 0.9844 - val_f2_score: 0.9846
Epoch 87/100
50/50 [==============================] - 4s 80ms/step - loss: 0.1457 - accuracy: 0.9888 - precision_8: 0.9892 - recall_8: 0.9884 - f2_score: 0.9886 - val_loss: 0.1475 - val_accuracy: 0.9872 - val_precision_8: 0.9872 - val_recall_8: 0.9872 - val_f2_score: 0.9872
Epoch 88/100
50/50 [==============================] - 4s 73ms/step - loss: 0.1542 - accuracy: 0.9858 - precision_8: 0.9862 - recall_8: 0.9858 - f2_score: 0.9859 - val_loss: 0.1803 - val_accuracy: 0.9815 - val_precision_8: 0.9815 - val_recall_8: 0.9815 - val_f2_score: 0.9815
Epoch 89/100
50/50 [==============================] - 4s 82ms/step - loss: 0.1557 - accuracy: 0.9840 - precision_8: 0.9848 - recall_8: 0.9835 - f2_score: 0.9838 - val_loss: 0.1782 - val_accuracy: 0.9687 - val_precision_8: 0.9701 - val_recall_8: 0.9687 - val_f2_score: 0.9690
Epoch 90/100
50/50 [==============================] - 4s 70ms/step - loss: 0.1511 - accuracy: 0.9864 - precision_8: 0.9869 - recall_8: 0.9859 - f2_score: 0.9861 - val_loss: 0.1666 - val_accuracy: 0.9772 - val_precision_8: 0.9772 - val_recall_8: 0.9772 - val_f2_score: 0.9772
Epoch 91/100
50/50 [==============================] - 4s 73ms/step - loss: 0.1469 - accuracy: 0.9877 - precision_8: 0.9881 - recall_8: 0.9870 - f2_score: 0.9872 - val_loss: 0.1481 - val_accuracy: 0.9858 - val_precision_8: 0.9858 - val_recall_8: 0.9858 - val_f2_score: 0.9858
Epoch 92/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1427 - accuracy: 0.9880 - precision_8: 0.9884 - recall_8: 0.9870 - f2_score: 0.9873 - val_loss: 0.1601 - val_accuracy: 0.9801 - val_precision_8: 0.9801 - val_recall_8: 0.9801 - val_f2_score: 0.9801
Epoch 93/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1428 - accuracy: 0.9878 - precision_8: 0.9884 - recall_8: 0.9873 - f2_score: 0.9876 - val_loss: 0.1780 - val_accuracy: 0.9829 - val_precision_8: 0.9843 - val_recall_8: 0.9829 - val_f2_score: 0.9832
Epoch 94/100
50/50 [==============================] - 3s 67ms/step - loss: 0.1670 - accuracy: 0.9801 - precision_8: 0.9813 - recall_8: 0.9793 - f2_score: 0.9797 - val_loss: 0.2246 - val_accuracy: 0.9602 - val_precision_8: 0.9602 - val_recall_8: 0.9602 - val_f2_score: 0.9602
Epoch 95/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1536 - accuracy: 0.9835 - precision_8: 0.9845 - recall_8: 0.9826 - f2_score: 0.9830 - val_loss: 0.1702 - val_accuracy: 0.9815 - val_precision_8: 0.9815 - val_recall_8: 0.9801 - val_f2_score: 0.9804
Epoch 96/100
50/50 [==============================] - 4s 74ms/step - loss: 0.1658 - accuracy: 0.9818 - precision_8: 0.9829 - recall_8: 0.9799 - f2_score: 0.9805 - val_loss: 0.1632 - val_accuracy: 0.9787 - val_precision_8: 0.9871 - val_recall_8: 0.9772 - val_f2_score: 0.9792
Epoch 97/100
50/50 [==============================] - 4s 71ms/step - loss: 0.1555 - accuracy: 0.9847 - precision_8: 0.9859 - recall_8: 0.9840 - f2_score: 0.9844 - val_loss: 0.1925 - val_accuracy: 0.9716 - val_precision_8: 0.9715 - val_recall_8: 0.9701 - val_f2_score: 0.9704
Epoch 98/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1450 - accuracy: 0.9878 - precision_8: 0.9884 - recall_8: 0.9872 - f2_score: 0.9874 - val_loss: 0.1972 - val_accuracy: 0.9744 - val_precision_8: 0.9771 - val_recall_8: 0.9730 - val_f2_score: 0.9738
Epoch 99/100
50/50 [==============================] - 3s 69ms/step - loss: 0.1390 - accuracy: 0.9875 - precision_8: 0.9883 - recall_8: 0.9873 - f2_score: 0.9875 - val_loss: 0.1672 - val_accuracy: 0.9772 - val_precision_8: 0.9772 - val_recall_8: 0.9772 - val_f2_score: 0.9772
Epoch 100/100
50/50 [==============================] - 3s 70ms/step - loss: 0.1484 - accuracy: 0.9843 - precision_8: 0.9853 - recall_8: 0.9837 - f2_score: 0.9840 - val_loss: 0.1484 - val_accuracy: 0.9858 - val_precision_8: 0.9858 - val_recall_8: 0.9858 - val_f2_score: 0.9858
In [ ]:
# Plot the training F2 Score
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(H2A.history['f2_score'], label = 'training')

# Plot the Val F2 Score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(H2A.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot the Loss
plt.subplot(1, 2, 2)
plt.plot(H2A.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot the Validation Loss
plt.plot(H2A.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = Conv2A.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred, NUM_CLASSES, 'Conv2A')

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {H2A.history['val_f2_score']}\n\tRecall: {H2A.history['val_recall_8']}")
print(f"\tPrecision: {H2A.history['val_precision_8']}\n\tAccuracy: {H2A.history['val_accuracy']}")
No description has been provided for this image
22/22 [==============================] - 1s 8ms/step
No description has been provided for this image
No description has been provided for this image
Validation Scores:
	F2 Score: [0.1287262886762619, 0.4853031039237976, 0.6584302186965942, 0.719685971736908, 0.7671113014221191, 0.7695627212524414, 0.7979798913002014, 0.8067758083343506, 0.8407333493232727, 0.8702486157417297, 0.8576746582984924, 0.8781465888023376, 0.79701828956604, 0.9320582151412964, 0.9120752215385437, 0.9137634634971619, 0.9116809964179993, 0.9321551322937012, 0.9144079685211182, 0.9278791546821594, 0.9421322345733643, 0.9215405583381653, 0.9572649598121643, 0.8992291688919067, 0.9601140022277832, 0.901569128036499, 0.954416036605835, 0.9324209094047546, 0.9512959122657776, 0.9572649598121643, 0.9603875279426575, 0.9578707218170166, 0.9584164023399353, 0.9704041481018066, 0.9387463927268982, 0.9721036553382874, 0.936164140701294, 0.9581435322761536, 0.951906681060791, 0.9541441798210144, 0.963289737701416, 0.9666857719421387, 0.9350981712341309, 0.9718270897865295, 0.9678336977958679, 0.9712331891059875, 0.9681093692779541, 0.9658120274543762, 0.9575981497764587, 0.9638382792472839, 0.9649872183799744, 0.9678336977958679, 0.9635639786720276, 0.9686609506607056, 0.984913170337677, 0.9789413213729858, 0.9604438543319702, 0.9718270897865295, 0.9658606648445129, 0.9809225797653198, 0.9569923877716064, 0.9775184988975525, 0.9721036553382874, 0.9536008834838867, 0.984913170337677, 0.9681093692779541, 0.9601140022277832, 0.9732497930526733, 0.9718270897865295, 0.9635639786720276, 0.9630156755447388, 0.9760956168174744, 0.9766515493392944, 0.9641128182411194, 0.9780750870704651, 0.9723803997039795, 0.9763734936714172, 0.9857752323150635, 0.9809225797653198, 0.98234623670578, 0.974672794342041, 0.9769296646118164, 0.9832099676132202, 0.9775184988975525, 0.9857752323150635, 0.9846329092979431, 0.9871976971626282, 0.9815077781677246, 0.9689812064170837, 0.9772403836250305, 0.9857752323150635, 0.9800854325294495, 0.9832099676132202, 0.9601706862449646, 0.980364203453064, 0.9791904091835022, 0.9704041481018066, 0.9738041162490845, 0.9772403836250305, 0.9857752323150635]
	Recall: [0.10810811072587967, 0.4509246051311493, 0.6443812251091003, 0.704125165939331, 0.7524893283843994, 0.7610241770744324, 0.7866287231445312, 0.7994310259819031, 0.8349928855895996, 0.866287350654602, 0.8520625829696655, 0.8733997344970703, 0.7908961772918701, 0.9288762211799622, 0.9089615941047668, 0.9103840589523315, 0.9103840589523315, 0.9302987456321716, 0.9118065237998962, 0.9260312914848328, 0.9402560591697693, 0.9189189076423645, 0.9559032917022705, 0.896159291267395, 0.9587482213973999, 0.8990042805671692, 0.9530583024024963, 0.9302987456321716, 0.9502133727073669, 0.9559032917022705, 0.9587482213973999, 0.9573257565498352, 0.9573257565498352, 0.9701279997825623, 0.9374110698699951, 0.9715505242347717, 0.9345661401748657, 0.9573257565498352, 0.9516358375549316, 0.9530583024024963, 0.9630156755447388, 0.9658606052398682, 0.9345661401748657, 0.9715505242347717, 0.9672830700874329, 0.9701279997825623, 0.9672830700874329, 0.9644381403923035, 0.9573257565498352, 0.9630156755447388, 0.9644381403923035, 0.9672830700874329, 0.9630156755447388, 0.9672830700874329, 0.9843527674674988, 0.9786628484725952, 0.9601706862449646, 0.9715505242347717, 0.9658606052398682, 0.9800853729248047, 0.9559032917022705, 0.9772403836250305, 0.9715505242347717, 0.9530583024024963, 0.9843527674674988, 0.9672830700874329, 0.9587482213973999, 0.9729729890823364, 0.9715505242347717, 0.9630156755447388, 0.9630156755447388, 0.9758179187774658, 0.9758179187774658, 0.9630156755447388, 0.9772403836250305, 0.9715505242347717, 0.9758179187774658, 0.9857752323150635, 0.9800853729248047, 0.9815078377723694, 0.9743954539299011, 0.9758179187774658, 0.9829303026199341, 0.9772403836250305, 0.9857752323150635, 0.9843527674674988, 0.9871976971626282, 0.9815078377723694, 0.9687055349349976, 0.9772403836250305, 0.9857752323150635, 0.9800853729248047, 0.9829303026199341, 0.9601706862449646, 0.9800853729248047, 0.9772403836250305, 0.9701279997825623, 0.9729729890823364, 0.9772403836250305, 0.9857752323150635]
	Precision: [0.5428571701049805, 0.6982378959655762, 0.7213375568389893, 0.7894737124443054, 0.8317610025405884, 0.8057228922843933, 0.8468606472015381, 0.8375558853149414, 0.8645066022872925, 0.8864628672599792, 0.8808823823928833, 0.8976607918739319, 0.8224852085113525, 0.9450072646141052, 0.9247467517852783, 0.9275362491607666, 0.9169054627418518, 0.9396551847457886, 0.9249639511108398, 0.9353448152542114, 0.9497126340866089, 0.9321789145469666, 0.9627507328987122, 0.9117221236228943, 0.9656160473823547, 0.9119769334793091, 0.9598853588104248, 0.9410071969032288, 0.9556509256362915, 0.9627507328987122, 0.9670014381408691, 0.9600570797920227, 0.9628040194511414, 0.9715099930763245, 0.9441260695457458, 0.9743223786354065, 0.9426112174987793, 0.9614285826683044, 0.9529914259910583, 0.9585121870040894, 0.9643874764442444, 0.9700000286102295, 0.9372325539588928, 0.9729344844818115, 0.9700428247451782, 0.9756795167922974, 0.9714285731315613, 0.9713466763496399, 0.9586894512176514, 0.9671428799629211, 0.9671897292137146, 0.9700428247451782, 0.9657632112503052, 0.9742120504379272, 0.9871612191200256, 0.9800570011138916, 0.9615384340286255, 0.9729344844818115, 0.9658606052398682, 0.9842857122421265, 0.9613733887672424, 0.9786324501037598, 0.9743223786354065, 0.9557774662971497, 0.9871612191200256, 0.9714285731315613, 0.9656160473823547, 0.9743589758872986, 0.9729344844818115, 0.9657632112503052, 0.9630156755447388, 0.9772079586982727, 0.9800000190734863, 0.9685264825820923, 0.9814285635948181, 0.9757142663002014, 0.9786019921302795, 0.9857752323150635, 0.9842857122421265, 0.9857142567634583, 0.9757834672927856, 0.9814019799232483, 0.9843304753303528, 0.9786324501037598, 0.9857752323150635, 0.9857549667358398, 0.9871976971626282, 0.9815078377723694, 0.9700854420661926, 0.9772403836250305, 0.9857752323150635, 0.9800853729248047, 0.9843304753303528, 0.9601706862449646, 0.9814814925193787, 0.9870689511299133, 0.9715099930763245, 0.977142870426178, 0.9772403836250305, 0.9857752323150635]
	Accuracy: [0.4253200590610504, 0.6187766790390015, 0.6842105388641357, 0.7567567825317383, 0.8051208853721619, 0.7866287231445312, 0.8221905827522278, 0.8278805017471313, 0.8449501991271973, 0.874822199344635, 0.866287350654602, 0.8847795128822327, 0.8036984205245972, 0.9345661401748657, 0.9174964427947998, 0.9189189076423645, 0.9132290482521057, 0.9345661401748657, 0.9174964427947998, 0.9302987456321716, 0.9445234537124634, 0.9274537563323975, 0.9601706862449646, 0.904694139957428, 0.9615931510925293, 0.904694139957428, 0.9559032917022705, 0.9345661401748657, 0.9502133727073669, 0.9587482213973999, 0.9630156755447388, 0.9587482213973999, 0.9615931510925293, 0.9701279997825623, 0.9431009888648987, 0.9729729890823364, 0.9388335943222046, 0.9587482213973999, 0.9530583024024963, 0.954480767250061, 0.9644381403923035, 0.9672830700874329, 0.9359886050224304, 0.9729729890823364, 0.9687055349349976, 0.9701279997825623, 0.9687055349349976, 0.9672830700874329, 0.9573257565498352, 0.9658606052398682, 0.9658606052398682, 0.9687055349349976, 0.9644381403923035, 0.9729729890823364, 0.9857752323150635, 0.9786628484725952, 0.9615931510925293, 0.9715505242347717, 0.9658606052398682, 0.9829303026199341, 0.9587482213973999, 0.9772403836250305, 0.9729729890823364, 0.954480767250061, 0.9843527674674988, 0.9715505242347717, 0.9630156755447388, 0.9729729890823364, 0.9715505242347717, 0.9644381403923035, 0.9630156755447388, 0.9758179187774658, 0.9772403836250305, 0.9658606052398682, 0.9786628484725952, 0.9743954539299011, 0.9758179187774658, 0.9857752323150635, 0.9815078377723694, 0.9843527674674988, 0.9743954539299011, 0.9772403836250305, 0.9843527674674988, 0.9772403836250305, 0.9857752323150635, 0.9857752323150635, 0.9871976971626282, 0.9815078377723694, 0.9687055349349976, 0.9772403836250305, 0.9857752323150635, 0.9800853729248047, 0.9829303026199341, 0.9601706862449646, 0.9815078377723694, 0.9786628484725952, 0.9715505242347717, 0.9743954539299011, 0.9772403836250305, 0.9857752323150635]
  • F2 Score: 0.9858
  • Recall: 0.9858
  • Precision: 0.9858
  • Accuracy: 0.9858

The confusion matrix for the different architecture shows exceptional performance with a high F2 score, recall, precision, and accuracy, all at 0.9858. This indicates a very balanced model with a strong performance in both identifying true positives (high recall) and minimizing false positives (high precision), which is further emphasized by the F2 score showing a bias towards recall. In the Confusion Matrix:

  • Glioma (0): The model has very few false positives, with only 4 cases where other conditions are incorrectly predicted as Glioma.

  • Meningioma (1): There is only 1 false positive, where a non-Meningioma case is misclassified as Meningioma.

  • No Tumor (2): Similarly, there is 1 case where a tumor is incorrectly predicted as 'No Tumor', indicating a rare instance of false positivity for the absence of a tumor.

  • Pituitary (3): There are no false positives, showing a perfect precision rate for this class.

The fact that the F2 score, recall, precision, and accuracy all have the same value of 0.9858 is an indication of the model's consistency across these metrics. It suggests that not only is the model good at identifying positive cases (as indicated by the high recall), but it is also very accurate in its predictions (as indicated by the high precision), without sacrificing one for the other. This balance is particularly important in medical imaging, where both missing a condition (false negative) and incorrectly diagnosing a condition (false positive) can have serious implications. However, recall is still more important nonetheless, but it is very good.

The F2 score being equal to recall and precision underscores the model's robustness. It manages to maintain this balance even when extra importance is placed on recall, which is crucial in medical diagnosis to ensure that as many true cases as possible are identified.

Conv2A Model Architecture and Parameters:

Filters (f): Conv2A starts with 32 filters in the first convolutional layer and doubles this number with each subsequent layer. This is indicative of a deep and complex network capable of learning a rich variety of features from the input data.

Layers (l): With 6 layers, Conv2A is designed to capture a hierarchy of features, from simple to complex, as the depth increases.

Kernel Size (k): The kernel size of 5x5 in Conv2A allows the model to capture more spatial information and potentially learn more complex features in the input space.

Regularization Constant (lam): The regularization constant of 0.0001 suggests a moderate level of regularization, aiming to prevent overfitting while still allowing the model to learn complex patterns.

Architecture Specifics: Conv2A uses he_uniform initialization, which is suited for layers with ReLU activation, and follows a pattern loosely based on the U-Net architecture.

Conv2B Model Architecture and Parameters:

Filters (f): Conv2B starts with a much smaller number of filters (4) and similarly doubles them with each layer. This design implies a more gradual and constrained approach to feature learning, potentially requiring less computational resources.

Layers (l): Conv2B matches Conv2A in depth with 6 layers, which means it is also structured to process features hierarchically.

Kernel Size (k): Conv2B employs the same kernel size of 5x5, which is consistent with the idea of capturing larger receptive fields in the input data.

Regularization Constant (lam): With a much smaller lambda of 0.0000001, Conv2B places a weaker constraint on the weights, possibly leading to a more flexible but also more overfit-prone model.

Dropout Regularization: Conv2B introduces a dropout layer with a rate of 0.25, which is a significant addition. Dropout helps in preventing overfitting by randomly disabling a subset of neurons, thus forcing the network to learn more robust features.

Architecture Specifics: Conv2B shares the use of he_uniform initialization and the U-Net-like architecture with Conv2A.

Impact on Model Performance:

Conv2A is designed to be a more powerful model in terms of capacity. It can learn more detailed and complex feature representations, which may lead to better performance on large and complex datasets. However, the moderate regularization and lack of dropout could make it more susceptible to overfitting, especially if the training data is not extensive enough to support such a complex model.

Conv2B takes a more conservative approach to feature learning and includes dropout to mitigate the risk of overfitting. This model may be more suitable for smaller datasets or when the computational resources are a limiting factor. It trades off some capacity for complexity in favor of potentially better generalization.

In [ ]:
# Parameters
f = 4           # No. Filters
l = 6           # No. Layers
k = 5           # Kernel Size (k x k)
lam = 0.0000001 # Kernel Regularization Constant (L2)

# Initialize Sequential Network
Conv2B = Sequential()

# Add Augmentations Directly
# Horizontal Flip, 10% Rotation, 10% Move, Brightness / Contrast Adjust 
Conv2B.add( RandomFlip("horizontal") )
Conv2B.add( RandomRotation(0.1) )
Conv2B.add( RandomTranslation(height_factor = 0.1, width_factor = 0.1) )
Conv2B.add( RandomBrightness(factor = 0.1, value_range = (0.0, 1.0)) )
Conv2B.add( RandomContrast(0.1) ) 

# Add Multiple Layers (Changeable)
for i in range(l):
  
    # Add Convolutional Layer, Follow With Pooling
    # Note: Loosely Following Unet Architecture
    Conv2B.add(Conv2D(filters = (f * 2 ** i),
                    input_shape = (IMG_SIZE, IMG_SIZE, 1),
                    kernel_size = (k, k), 
                    kernel_regularizer = l2(lam),
                    kernel_initializer = 'he_uniform',
                    padding = 'same', 
                    activation = 'relu',
                    data_format = 'channels_last'))
    Conv2B.add(MaxPooling2D(pool_size = (2, 2), data_format = 'channels_last'))

# Flatten After Convolutional Layers
Conv2B.add(Flatten())

# Added Dropout Regularization - Intepret Model Changes
Conv2B.add(Dropout(0.25))
Conv2B.add(Dense(NUM_CLASSES, activation = 'softmax', 
                kernel_initializer = 'glorot_uniform',
                kernel_regularizer = l2(lam)
                ))
In [ ]:
# Train With CC, Adam
Conv2B.compile(loss = 'categorical_crossentropy',
               optimizer = 'adam',
               metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Build Model With Basic Parameters (Build For Grayscale Images)
Conv2B.build((None, IMG_SIZE, IMG_SIZE, 1))
Conv2B.summary()

# Fit Model (High Patience For Full Convergence)
H2B = Conv2B.fit(X_train, y_train_encoded, 
          batch_size = 128,
          epochs = 100, 
          verbose = 1,
          validation_data = (X_test, y_test_encoded))
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip_4 (RandomFlip)  (None, 256, 256, 1)       0         
                                                                 
 random_rotation_4 (RandomR  (None, 256, 256, 1)       0         
 otation)                                                        
                                                                 
 random_translation_4 (Rand  (None, 256, 256, 1)       0         
 omTranslation)                                                  
                                                                 
 random_brightness_4 (Rando  (None, 256, 256, 1)       0         
 mBrightness)                                                    
                                                                 
 random_contrast_4 (RandomC  (None, 256, 256, 1)       0         
 ontrast)                                                        
                                                                 
 conv2d_18 (Conv2D)          (None, 256, 256, 4)       104       
                                                                 
 max_pooling2d_18 (MaxPooli  (None, 128, 128, 4)       0         
 ng2D)                                                           
                                                                 
 conv2d_19 (Conv2D)          (None, 128, 128, 8)       808       
                                                                 
 max_pooling2d_19 (MaxPooli  (None, 64, 64, 8)         0         
 ng2D)                                                           
                                                                 
 conv2d_20 (Conv2D)          (None, 64, 64, 16)        3216      
                                                                 
 max_pooling2d_20 (MaxPooli  (None, 32, 32, 16)        0         
 ng2D)                                                           
                                                                 
 conv2d_21 (Conv2D)          (None, 32, 32, 32)        12832     
                                                                 
 max_pooling2d_21 (MaxPooli  (None, 16, 16, 32)        0         
 ng2D)                                                           
                                                                 
 conv2d_22 (Conv2D)          (None, 16, 16, 64)        51264     
                                                                 
 max_pooling2d_22 (MaxPooli  (None, 8, 8, 64)          0         
 ng2D)                                                           
                                                                 
 conv2d_23 (Conv2D)          (None, 8, 8, 128)         204928    
                                                                 
 max_pooling2d_23 (MaxPooli  (None, 4, 4, 128)         0         
 ng2D)                                                           
                                                                 
 flatten_4 (Flatten)         (None, 2048)              0         
                                                                 
 dropout (Dropout)           (None, 2048)              0         
                                                                 
 dense_4 (Dense)             (None, 4)                 8196      
                                                                 
=================================================================
Total params: 281348 (1.07 MB)
Trainable params: 281348 (1.07 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
50/50 [==============================] - 6s 36ms/step - loss: 1.3187 - accuracy: 0.3850 - precision_10: 0.5816 - recall_10: 0.1060 - f2_score: 0.1267 - val_loss: 0.9903 - val_accuracy: 0.6003 - val_precision_10: 0.6497 - val_recall_10: 0.4353 - val_f2_score: 0.4660
Epoch 2/100
50/50 [==============================] - 1s 20ms/step - loss: 0.9124 - accuracy: 0.6274 - precision_10: 0.7239 - recall_10: 0.4326 - f2_score: 0.4705 - val_loss: 0.7113 - val_accuracy: 0.6828 - val_precision_10: 0.7611 - val_recall_10: 0.5619 - val_f2_score: 0.5929
Epoch 3/100
50/50 [==============================] - 1s 24ms/step - loss: 0.7576 - accuracy: 0.6873 - precision_10: 0.7557 - recall_10: 0.5810 - f2_score: 0.6092 - val_loss: 0.6170 - val_accuracy: 0.7312 - val_precision_10: 0.7857 - val_recall_10: 0.6572 - val_f2_score: 0.6794
Epoch 4/100
50/50 [==============================] - 2s 40ms/step - loss: 0.6783 - accuracy: 0.7290 - precision_10: 0.7762 - recall_10: 0.6426 - f2_score: 0.6655 - val_loss: 0.5652 - val_accuracy: 0.7710 - val_precision_10: 0.8220 - val_recall_10: 0.6899 - val_f2_score: 0.7128
Epoch 5/100
50/50 [==============================] - 1s 23ms/step - loss: 0.6052 - accuracy: 0.7576 - precision_10: 0.7950 - recall_10: 0.6954 - f2_score: 0.7133 - val_loss: 0.5076 - val_accuracy: 0.7923 - val_precision_10: 0.8242 - val_recall_10: 0.7468 - val_f2_score: 0.7611
Epoch 6/100
50/50 [==============================] - 1s 21ms/step - loss: 0.5551 - accuracy: 0.7796 - precision_10: 0.8137 - recall_10: 0.7332 - f2_score: 0.7480 - val_loss: 0.5070 - val_accuracy: 0.7909 - val_precision_10: 0.8157 - val_recall_10: 0.7553 - val_f2_score: 0.7667
Epoch 7/100
50/50 [==============================] - 1s 22ms/step - loss: 0.5607 - accuracy: 0.7745 - precision_10: 0.8073 - recall_10: 0.7297 - f2_score: 0.7440 - val_loss: 0.4962 - val_accuracy: 0.7767 - val_precision_10: 0.8209 - val_recall_10: 0.7496 - val_f2_score: 0.7629
Epoch 8/100
50/50 [==============================] - 2s 43ms/step - loss: 0.5216 - accuracy: 0.7894 - precision_10: 0.8159 - recall_10: 0.7503 - f2_score: 0.7626 - val_loss: 0.4428 - val_accuracy: 0.8108 - val_precision_10: 0.8271 - val_recall_10: 0.7824 - val_f2_score: 0.7909
Epoch 9/100
50/50 [==============================] - 1s 21ms/step - loss: 0.4918 - accuracy: 0.8009 - precision_10: 0.8263 - recall_10: 0.7715 - f2_score: 0.7819 - val_loss: 0.5010 - val_accuracy: 0.7795 - val_precision_10: 0.7970 - val_recall_10: 0.7596 - val_f2_score: 0.7668
Epoch 10/100
50/50 [==============================] - 1s 28ms/step - loss: 0.4836 - accuracy: 0.8131 - precision_10: 0.8331 - recall_10: 0.7741 - f2_score: 0.7852 - val_loss: 0.3988 - val_accuracy: 0.8350 - val_precision_10: 0.8565 - val_recall_10: 0.8065 - val_f2_score: 0.8161
Epoch 11/100
50/50 [==============================] - 1s 21ms/step - loss: 0.4204 - accuracy: 0.8326 - precision_10: 0.8543 - recall_10: 0.8117 - f2_score: 0.8199 - val_loss: 0.3670 - val_accuracy: 0.8393 - val_precision_10: 0.8540 - val_recall_10: 0.8236 - val_f2_score: 0.8295
Epoch 12/100
50/50 [==============================] - 1s 21ms/step - loss: 0.4156 - accuracy: 0.8332 - precision_10: 0.8534 - recall_10: 0.8139 - f2_score: 0.8215 - val_loss: 0.3597 - val_accuracy: 0.8578 - val_precision_10: 0.8733 - val_recall_10: 0.8435 - val_f2_score: 0.8493
Epoch 13/100
50/50 [==============================] - 1s 22ms/step - loss: 0.3829 - accuracy: 0.8487 - precision_10: 0.8658 - recall_10: 0.8328 - f2_score: 0.8392 - val_loss: 0.3273 - val_accuracy: 0.8691 - val_precision_10: 0.8745 - val_recall_10: 0.8620 - val_f2_score: 0.8645
Epoch 14/100
50/50 [==============================] - 2s 36ms/step - loss: 0.3756 - accuracy: 0.8514 - precision_10: 0.8653 - recall_10: 0.8347 - f2_score: 0.8406 - val_loss: 0.3587 - val_accuracy: 0.8492 - val_precision_10: 0.8607 - val_recall_10: 0.8350 - val_f2_score: 0.8400
Epoch 15/100
50/50 [==============================] - 1s 26ms/step - loss: 0.3530 - accuracy: 0.8644 - precision_10: 0.8770 - recall_10: 0.8509 - f2_score: 0.8560 - val_loss: 0.2671 - val_accuracy: 0.8890 - val_precision_10: 0.9032 - val_recall_10: 0.8762 - val_f2_score: 0.8815
Epoch 16/100
50/50 [==============================] - 1s 21ms/step - loss: 0.3180 - accuracy: 0.8835 - precision_10: 0.8954 - recall_10: 0.8696 - f2_score: 0.8747 - val_loss: 0.2601 - val_accuracy: 0.8890 - val_precision_10: 0.9037 - val_recall_10: 0.8677 - val_f2_score: 0.8747
Epoch 17/100
50/50 [==============================] - 1s 20ms/step - loss: 0.2981 - accuracy: 0.8862 - precision_10: 0.9000 - recall_10: 0.8756 - f2_score: 0.8804 - val_loss: 0.2558 - val_accuracy: 0.9004 - val_precision_10: 0.9037 - val_recall_10: 0.8947 - val_f2_score: 0.8965
Epoch 18/100
50/50 [==============================] - 1s 20ms/step - loss: 0.2780 - accuracy: 0.8953 - precision_10: 0.9036 - recall_10: 0.8850 - f2_score: 0.8886 - val_loss: 0.2504 - val_accuracy: 0.9033 - val_precision_10: 0.9109 - val_recall_10: 0.8876 - val_f2_score: 0.8922
Epoch 19/100
50/50 [==============================] - 2s 39ms/step - loss: 0.2564 - accuracy: 0.9030 - precision_10: 0.9126 - recall_10: 0.8935 - f2_score: 0.8973 - val_loss: 0.2088 - val_accuracy: 0.9189 - val_precision_10: 0.9322 - val_recall_10: 0.9189 - val_f2_score: 0.9215
Epoch 20/100
50/50 [==============================] - 1s 27ms/step - loss: 0.2403 - accuracy: 0.9119 - precision_10: 0.9180 - recall_10: 0.9033 - f2_score: 0.9062 - val_loss: 0.1866 - val_accuracy: 0.9303 - val_precision_10: 0.9376 - val_recall_10: 0.9189 - val_f2_score: 0.9226
Epoch 21/100
50/50 [==============================] - 1s 22ms/step - loss: 0.2370 - accuracy: 0.9136 - precision_10: 0.9199 - recall_10: 0.9047 - f2_score: 0.9077 - val_loss: 0.1536 - val_accuracy: 0.9360 - val_precision_10: 0.9440 - val_recall_10: 0.9360 - val_f2_score: 0.9376
Epoch 22/100
50/50 [==============================] - 1s 26ms/step - loss: 0.2368 - accuracy: 0.9184 - precision_10: 0.9236 - recall_10: 0.9111 - f2_score: 0.9136 - val_loss: 0.1971 - val_accuracy: 0.9232 - val_precision_10: 0.9283 - val_recall_10: 0.9203 - val_f2_score: 0.9219
Epoch 23/100
50/50 [==============================] - 1s 20ms/step - loss: 0.2299 - accuracy: 0.9130 - precision_10: 0.9198 - recall_10: 0.9071 - f2_score: 0.9096 - val_loss: 0.2395 - val_accuracy: 0.9175 - val_precision_10: 0.9280 - val_recall_10: 0.8990 - val_f2_score: 0.9047
Epoch 24/100
50/50 [==============================] - 2s 44ms/step - loss: 0.2070 - accuracy: 0.9252 - precision_10: 0.9312 - recall_10: 0.9184 - f2_score: 0.9209 - val_loss: 0.1421 - val_accuracy: 0.9331 - val_precision_10: 0.9397 - val_recall_10: 0.9317 - val_f2_score: 0.9333
Epoch 25/100
50/50 [==============================] - 1s 26ms/step - loss: 0.1873 - accuracy: 0.9321 - precision_10: 0.9357 - recall_10: 0.9275 - f2_score: 0.9291 - val_loss: 0.1837 - val_accuracy: 0.9317 - val_precision_10: 0.9390 - val_recall_10: 0.9189 - val_f2_score: 0.9229
Epoch 26/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1858 - accuracy: 0.9328 - precision_10: 0.9393 - recall_10: 0.9275 - f2_score: 0.9299 - val_loss: 0.1609 - val_accuracy: 0.9374 - val_precision_10: 0.9428 - val_recall_10: 0.9374 - val_f2_score: 0.9385
Epoch 27/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1858 - accuracy: 0.9293 - precision_10: 0.9350 - recall_10: 0.9236 - f2_score: 0.9258 - val_loss: 0.1431 - val_accuracy: 0.9445 - val_precision_10: 0.9481 - val_recall_10: 0.9346 - val_f2_score: 0.9372
Epoch 28/100
50/50 [==============================] - 1s 24ms/step - loss: 0.1852 - accuracy: 0.9297 - precision_10: 0.9354 - recall_10: 0.9253 - f2_score: 0.9273 - val_loss: 0.2116 - val_accuracy: 0.9275 - val_precision_10: 0.9286 - val_recall_10: 0.9246 - val_f2_score: 0.9254
Epoch 29/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1897 - accuracy: 0.9307 - precision_10: 0.9352 - recall_10: 0.9271 - f2_score: 0.9287 - val_loss: 0.1460 - val_accuracy: 0.9474 - val_precision_10: 0.9499 - val_recall_10: 0.9445 - val_f2_score: 0.9456
Epoch 30/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1669 - accuracy: 0.9413 - precision_10: 0.9462 - recall_10: 0.9359 - f2_score: 0.9380 - val_loss: 0.1642 - val_accuracy: 0.9474 - val_precision_10: 0.9485 - val_recall_10: 0.9431 - val_f2_score: 0.9442
Epoch 31/100
50/50 [==============================] - 1s 20ms/step - loss: 0.1752 - accuracy: 0.9342 - precision_10: 0.9383 - recall_10: 0.9305 - f2_score: 0.9321 - val_loss: 0.2191 - val_accuracy: 0.9061 - val_precision_10: 0.9150 - val_recall_10: 0.9033 - val_f2_score: 0.9056
Epoch 32/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1847 - accuracy: 0.9323 - precision_10: 0.9370 - recall_10: 0.9266 - f2_score: 0.9286 - val_loss: 0.1122 - val_accuracy: 0.9531 - val_precision_10: 0.9558 - val_recall_10: 0.9531 - val_f2_score: 0.9536
Epoch 33/100
50/50 [==============================] - 1s 20ms/step - loss: 0.1341 - accuracy: 0.9502 - precision_10: 0.9543 - recall_10: 0.9473 - f2_score: 0.9487 - val_loss: 0.1184 - val_accuracy: 0.9502 - val_precision_10: 0.9542 - val_recall_10: 0.9488 - val_f2_score: 0.9499
Epoch 34/100
50/50 [==============================] - 1s 26ms/step - loss: 0.1344 - accuracy: 0.9502 - precision_10: 0.9534 - recall_10: 0.9478 - f2_score: 0.9489 - val_loss: 0.1226 - val_accuracy: 0.9516 - val_precision_10: 0.9557 - val_recall_10: 0.9502 - val_f2_score: 0.9513
Epoch 35/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1483 - accuracy: 0.9440 - precision_10: 0.9481 - recall_10: 0.9403 - f2_score: 0.9419 - val_loss: 0.0940 - val_accuracy: 0.9687 - val_precision_10: 0.9700 - val_recall_10: 0.9644 - val_f2_score: 0.9655
Epoch 36/100
50/50 [==============================] - 2s 32ms/step - loss: 0.1385 - accuracy: 0.9468 - precision_10: 0.9511 - recall_10: 0.9426 - f2_score: 0.9443 - val_loss: 0.0948 - val_accuracy: 0.9701 - val_precision_10: 0.9742 - val_recall_10: 0.9687 - val_f2_score: 0.9698
Epoch 37/100
50/50 [==============================] - 1s 27ms/step - loss: 0.1481 - accuracy: 0.9451 - precision_10: 0.9490 - recall_10: 0.9422 - f2_score: 0.9436 - val_loss: 0.1228 - val_accuracy: 0.9559 - val_precision_10: 0.9598 - val_recall_10: 0.9516 - val_f2_score: 0.9533
Epoch 38/100
50/50 [==============================] - 2s 33ms/step - loss: 0.1141 - accuracy: 0.9603 - precision_10: 0.9637 - recall_10: 0.9574 - f2_score: 0.9587 - val_loss: 0.1165 - val_accuracy: 0.9474 - val_precision_10: 0.9511 - val_recall_10: 0.9417 - val_f2_score: 0.9436
Epoch 39/100
50/50 [==============================] - 2s 37ms/step - loss: 0.1151 - accuracy: 0.9552 - precision_10: 0.9578 - recall_10: 0.9528 - f2_score: 0.9538 - val_loss: 0.1077 - val_accuracy: 0.9616 - val_precision_10: 0.9657 - val_recall_10: 0.9602 - val_f2_score: 0.9613
Epoch 40/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1310 - accuracy: 0.9502 - precision_10: 0.9533 - recall_10: 0.9468 - f2_score: 0.9481 - val_loss: 0.0812 - val_accuracy: 0.9644 - val_precision_10: 0.9671 - val_recall_10: 0.9630 - val_f2_score: 0.9638
Epoch 41/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1073 - accuracy: 0.9589 - precision_10: 0.9600 - recall_10: 0.9578 - f2_score: 0.9582 - val_loss: 0.1392 - val_accuracy: 0.9417 - val_precision_10: 0.9456 - val_recall_10: 0.9403 - val_f2_score: 0.9413
Epoch 42/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1165 - accuracy: 0.9576 - precision_10: 0.9610 - recall_10: 0.9544 - f2_score: 0.9557 - val_loss: 0.0970 - val_accuracy: 0.9673 - val_precision_10: 0.9728 - val_recall_10: 0.9673 - val_f2_score: 0.9684
Epoch 43/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1140 - accuracy: 0.9573 - precision_10: 0.9590 - recall_10: 0.9540 - f2_score: 0.9550 - val_loss: 0.0945 - val_accuracy: 0.9573 - val_precision_10: 0.9586 - val_recall_10: 0.9545 - val_f2_score: 0.9553
Epoch 44/100
50/50 [==============================] - 2s 37ms/step - loss: 0.1016 - accuracy: 0.9628 - precision_10: 0.9646 - recall_10: 0.9616 - f2_score: 0.9622 - val_loss: 0.0674 - val_accuracy: 0.9701 - val_precision_10: 0.9729 - val_recall_10: 0.9701 - val_f2_score: 0.9707
Epoch 45/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0948 - accuracy: 0.9661 - precision_10: 0.9681 - recall_10: 0.9642 - f2_score: 0.9650 - val_loss: 0.0698 - val_accuracy: 0.9730 - val_precision_10: 0.9771 - val_recall_10: 0.9730 - val_f2_score: 0.9738
Epoch 46/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0990 - accuracy: 0.9630 - precision_10: 0.9643 - recall_10: 0.9612 - f2_score: 0.9618 - val_loss: 0.0751 - val_accuracy: 0.9673 - val_precision_10: 0.9713 - val_recall_10: 0.9644 - val_f2_score: 0.9658
Epoch 47/100
50/50 [==============================] - 2s 35ms/step - loss: 0.0956 - accuracy: 0.9665 - precision_10: 0.9691 - recall_10: 0.9638 - f2_score: 0.9648 - val_loss: 0.1078 - val_accuracy: 0.9587 - val_precision_10: 0.9601 - val_recall_10: 0.9587 - val_f2_score: 0.9590
Epoch 48/100
50/50 [==============================] - 1s 23ms/step - loss: 0.1152 - accuracy: 0.9597 - precision_10: 0.9622 - recall_10: 0.9576 - f2_score: 0.9585 - val_loss: 0.0853 - val_accuracy: 0.9687 - val_precision_10: 0.9728 - val_recall_10: 0.9659 - val_f2_score: 0.9672
Epoch 49/100
50/50 [==============================] - 2s 31ms/step - loss: 0.0899 - accuracy: 0.9690 - precision_10: 0.9706 - recall_10: 0.9669 - f2_score: 0.9677 - val_loss: 0.0839 - val_accuracy: 0.9630 - val_precision_10: 0.9629 - val_recall_10: 0.9602 - val_f2_score: 0.9607
Epoch 50/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0837 - accuracy: 0.9717 - precision_10: 0.9738 - recall_10: 0.9698 - f2_score: 0.9706 - val_loss: 0.0646 - val_accuracy: 0.9716 - val_precision_10: 0.9716 - val_recall_10: 0.9716 - val_f2_score: 0.9716
Epoch 51/100
50/50 [==============================] - 1s 28ms/step - loss: 0.0812 - accuracy: 0.9706 - precision_10: 0.9719 - recall_10: 0.9695 - f2_score: 0.9700 - val_loss: 0.0961 - val_accuracy: 0.9673 - val_precision_10: 0.9672 - val_recall_10: 0.9659 - val_f2_score: 0.9661
Epoch 52/100
50/50 [==============================] - 1s 23ms/step - loss: 0.1005 - accuracy: 0.9633 - precision_10: 0.9643 - recall_10: 0.9619 - f2_score: 0.9624 - val_loss: 0.0906 - val_accuracy: 0.9701 - val_precision_10: 0.9700 - val_recall_10: 0.9673 - val_f2_score: 0.9678
Epoch 53/100
50/50 [==============================] - 2s 50ms/step - loss: 0.0951 - accuracy: 0.9650 - precision_10: 0.9677 - recall_10: 0.9628 - f2_score: 0.9638 - val_loss: 0.0808 - val_accuracy: 0.9673 - val_precision_10: 0.9686 - val_recall_10: 0.9659 - val_f2_score: 0.9664
Epoch 54/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0758 - accuracy: 0.9736 - precision_10: 0.9746 - recall_10: 0.9715 - f2_score: 0.9721 - val_loss: 0.1230 - val_accuracy: 0.9659 - val_precision_10: 0.9672 - val_recall_10: 0.9659 - val_f2_score: 0.9661
Epoch 55/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0845 - accuracy: 0.9698 - precision_10: 0.9714 - recall_10: 0.9684 - f2_score: 0.9690 - val_loss: 0.0849 - val_accuracy: 0.9644 - val_precision_10: 0.9686 - val_recall_10: 0.9644 - val_f2_score: 0.9653
Epoch 56/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0718 - accuracy: 0.9733 - precision_10: 0.9746 - recall_10: 0.9725 - f2_score: 0.9729 - val_loss: 0.0623 - val_accuracy: 0.9772 - val_precision_10: 0.9786 - val_recall_10: 0.9772 - val_f2_score: 0.9775
Epoch 57/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0745 - accuracy: 0.9734 - precision_10: 0.9746 - recall_10: 0.9728 - f2_score: 0.9732 - val_loss: 0.0579 - val_accuracy: 0.9858 - val_precision_10: 0.9872 - val_recall_10: 0.9858 - val_f2_score: 0.9861
Epoch 58/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0867 - accuracy: 0.9696 - precision_10: 0.9713 - recall_10: 0.9679 - f2_score: 0.9686 - val_loss: 0.0647 - val_accuracy: 0.9787 - val_precision_10: 0.9801 - val_recall_10: 0.9787 - val_f2_score: 0.9789
Epoch 59/100
50/50 [==============================] - 2s 32ms/step - loss: 0.0649 - accuracy: 0.9772 - precision_10: 0.9784 - recall_10: 0.9759 - f2_score: 0.9764 - val_loss: 0.0661 - val_accuracy: 0.9744 - val_precision_10: 0.9744 - val_recall_10: 0.9744 - val_f2_score: 0.9744
Epoch 60/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0958 - accuracy: 0.9620 - precision_10: 0.9647 - recall_10: 0.9611 - f2_score: 0.9618 - val_loss: 0.0664 - val_accuracy: 0.9787 - val_precision_10: 0.9800 - val_recall_10: 0.9758 - val_f2_score: 0.9767
Epoch 61/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0718 - accuracy: 0.9742 - precision_10: 0.9748 - recall_10: 0.9728 - f2_score: 0.9732 - val_loss: 0.0805 - val_accuracy: 0.9673 - val_precision_10: 0.9672 - val_recall_10: 0.9644 - val_f2_score: 0.9650
Epoch 62/100
50/50 [==============================] - 1s 28ms/step - loss: 0.0656 - accuracy: 0.9755 - precision_10: 0.9762 - recall_10: 0.9745 - f2_score: 0.9749 - val_loss: 0.0757 - val_accuracy: 0.9730 - val_precision_10: 0.9729 - val_recall_10: 0.9716 - val_f2_score: 0.9718
Epoch 63/100
50/50 [==============================] - 2s 35ms/step - loss: 0.0761 - accuracy: 0.9731 - precision_10: 0.9748 - recall_10: 0.9717 - f2_score: 0.9723 - val_loss: 0.1037 - val_accuracy: 0.9602 - val_precision_10: 0.9615 - val_recall_10: 0.9602 - val_f2_score: 0.9604
Epoch 64/100
50/50 [==============================] - 2s 33ms/step - loss: 0.0621 - accuracy: 0.9786 - precision_10: 0.9794 - recall_10: 0.9780 - f2_score: 0.9783 - val_loss: 0.0814 - val_accuracy: 0.9616 - val_precision_10: 0.9630 - val_recall_10: 0.9616 - val_f2_score: 0.9619
Epoch 65/100
50/50 [==============================] - 1s 25ms/step - loss: 0.0694 - accuracy: 0.9750 - precision_10: 0.9761 - recall_10: 0.9742 - f2_score: 0.9746 - val_loss: 0.0593 - val_accuracy: 0.9787 - val_precision_10: 0.9801 - val_recall_10: 0.9787 - val_f2_score: 0.9789
Epoch 66/100
50/50 [==============================] - 1s 20ms/step - loss: 0.0759 - accuracy: 0.9734 - precision_10: 0.9749 - recall_10: 0.9728 - f2_score: 0.9732 - val_loss: 0.1274 - val_accuracy: 0.9502 - val_precision_10: 0.9516 - val_recall_10: 0.9502 - val_f2_score: 0.9505
Epoch 67/100
50/50 [==============================] - 1s 30ms/step - loss: 0.0713 - accuracy: 0.9736 - precision_10: 0.9743 - recall_10: 0.9717 - f2_score: 0.9722 - val_loss: 0.0499 - val_accuracy: 0.9815 - val_precision_10: 0.9815 - val_recall_10: 0.9815 - val_f2_score: 0.9815
Epoch 68/100
50/50 [==============================] - 2s 36ms/step - loss: 0.0683 - accuracy: 0.9755 - precision_10: 0.9767 - recall_10: 0.9742 - f2_score: 0.9747 - val_loss: 0.0887 - val_accuracy: 0.9716 - val_precision_10: 0.9714 - val_recall_10: 0.9673 - val_f2_score: 0.9681
Epoch 69/100
50/50 [==============================] - 2s 35ms/step - loss: 0.0563 - accuracy: 0.9796 - precision_10: 0.9810 - recall_10: 0.9786 - f2_score: 0.9791 - val_loss: 0.0540 - val_accuracy: 0.9801 - val_precision_10: 0.9801 - val_recall_10: 0.9801 - val_f2_score: 0.9801
Epoch 70/100
50/50 [==============================] - 1s 20ms/step - loss: 0.0612 - accuracy: 0.9788 - precision_10: 0.9794 - recall_10: 0.9783 - f2_score: 0.9785 - val_loss: 0.0591 - val_accuracy: 0.9744 - val_precision_10: 0.9744 - val_recall_10: 0.9744 - val_f2_score: 0.9744
Epoch 71/100
50/50 [==============================] - 1s 20ms/step - loss: 0.0634 - accuracy: 0.9766 - precision_10: 0.9786 - recall_10: 0.9753 - f2_score: 0.9760 - val_loss: 0.0541 - val_accuracy: 0.9815 - val_precision_10: 0.9815 - val_recall_10: 0.9815 - val_f2_score: 0.9815
Epoch 72/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0662 - accuracy: 0.9761 - precision_10: 0.9775 - recall_10: 0.9758 - f2_score: 0.9761 - val_loss: 0.0561 - val_accuracy: 0.9815 - val_precision_10: 0.9815 - val_recall_10: 0.9815 - val_f2_score: 0.9815
Epoch 73/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0596 - accuracy: 0.9805 - precision_10: 0.9814 - recall_10: 0.9791 - f2_score: 0.9796 - val_loss: 0.0566 - val_accuracy: 0.9758 - val_precision_10: 0.9758 - val_recall_10: 0.9758 - val_f2_score: 0.9758
Epoch 74/100
50/50 [==============================] - 2s 34ms/step - loss: 0.0541 - accuracy: 0.9824 - precision_10: 0.9827 - recall_10: 0.9813 - f2_score: 0.9816 - val_loss: 0.0704 - val_accuracy: 0.9758 - val_precision_10: 0.9758 - val_recall_10: 0.9758 - val_f2_score: 0.9758
Epoch 75/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0574 - accuracy: 0.9809 - precision_10: 0.9811 - recall_10: 0.9799 - f2_score: 0.9802 - val_loss: 0.0525 - val_accuracy: 0.9872 - val_precision_10: 0.9872 - val_recall_10: 0.9858 - val_f2_score: 0.9861
Epoch 76/100
50/50 [==============================] - 1s 26ms/step - loss: 0.0656 - accuracy: 0.9772 - precision_10: 0.9786 - recall_10: 0.9771 - f2_score: 0.9774 - val_loss: 0.0548 - val_accuracy: 0.9815 - val_precision_10: 0.9815 - val_recall_10: 0.9815 - val_f2_score: 0.9815
Epoch 77/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0693 - accuracy: 0.9778 - precision_10: 0.9783 - recall_10: 0.9769 - f2_score: 0.9772 - val_loss: 0.1180 - val_accuracy: 0.9587 - val_precision_10: 0.9587 - val_recall_10: 0.9587 - val_f2_score: 0.9587
Epoch 78/100
50/50 [==============================] - 2s 51ms/step - loss: 0.0621 - accuracy: 0.9810 - precision_10: 0.9815 - recall_10: 0.9804 - f2_score: 0.9806 - val_loss: 0.0562 - val_accuracy: 0.9829 - val_precision_10: 0.9829 - val_recall_10: 0.9829 - val_f2_score: 0.9829
Epoch 79/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0626 - accuracy: 0.9775 - precision_10: 0.9786 - recall_10: 0.9769 - f2_score: 0.9772 - val_loss: 0.0936 - val_accuracy: 0.9716 - val_precision_10: 0.9729 - val_recall_10: 0.9716 - val_f2_score: 0.9718
Epoch 80/100
50/50 [==============================] - 2s 34ms/step - loss: 0.0616 - accuracy: 0.9775 - precision_10: 0.9783 - recall_10: 0.9766 - f2_score: 0.9769 - val_loss: 0.0780 - val_accuracy: 0.9701 - val_precision_10: 0.9701 - val_recall_10: 0.9701 - val_f2_score: 0.9701
Epoch 81/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0571 - accuracy: 0.9791 - precision_10: 0.9799 - recall_10: 0.9783 - f2_score: 0.9786 - val_loss: 0.0488 - val_accuracy: 0.9801 - val_precision_10: 0.9801 - val_recall_10: 0.9801 - val_f2_score: 0.9801
Epoch 82/100
50/50 [==============================] - 1s 27ms/step - loss: 0.0605 - accuracy: 0.9783 - precision_10: 0.9792 - recall_10: 0.9774 - f2_score: 0.9777 - val_loss: 0.0944 - val_accuracy: 0.9744 - val_precision_10: 0.9744 - val_recall_10: 0.9744 - val_f2_score: 0.9744
Epoch 83/100
50/50 [==============================] - 2s 38ms/step - loss: 0.0620 - accuracy: 0.9766 - precision_10: 0.9773 - recall_10: 0.9755 - f2_score: 0.9758 - val_loss: 0.0653 - val_accuracy: 0.9801 - val_precision_10: 0.9801 - val_recall_10: 0.9787 - val_f2_score: 0.9789
Epoch 84/100
50/50 [==============================] - 2s 36ms/step - loss: 0.0575 - accuracy: 0.9797 - precision_10: 0.9811 - recall_10: 0.9791 - f2_score: 0.9795 - val_loss: 0.0926 - val_accuracy: 0.9758 - val_precision_10: 0.9758 - val_recall_10: 0.9758 - val_f2_score: 0.9758
Epoch 85/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0541 - accuracy: 0.9821 - precision_10: 0.9826 - recall_10: 0.9816 - f2_score: 0.9818 - val_loss: 0.0535 - val_accuracy: 0.9829 - val_precision_10: 0.9829 - val_recall_10: 0.9815 - val_f2_score: 0.9818
Epoch 86/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0424 - accuracy: 0.9853 - precision_10: 0.9854 - recall_10: 0.9848 - f2_score: 0.9849 - val_loss: 0.0478 - val_accuracy: 0.9787 - val_precision_10: 0.9787 - val_recall_10: 0.9787 - val_f2_score: 0.9787
Epoch 87/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0476 - accuracy: 0.9810 - precision_10: 0.9824 - recall_10: 0.9805 - f2_score: 0.9809 - val_loss: 0.0603 - val_accuracy: 0.9772 - val_precision_10: 0.9772 - val_recall_10: 0.9772 - val_f2_score: 0.9772
Epoch 88/100
50/50 [==============================] - 2s 36ms/step - loss: 0.0509 - accuracy: 0.9810 - precision_10: 0.9818 - recall_10: 0.9805 - f2_score: 0.9808 - val_loss: 0.0605 - val_accuracy: 0.9801 - val_precision_10: 0.9801 - val_recall_10: 0.9801 - val_f2_score: 0.9801
Epoch 89/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0665 - accuracy: 0.9748 - precision_10: 0.9759 - recall_10: 0.9741 - f2_score: 0.9744 - val_loss: 0.0651 - val_accuracy: 0.9716 - val_precision_10: 0.9729 - val_recall_10: 0.9716 - val_f2_score: 0.9718
Epoch 90/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0566 - accuracy: 0.9809 - precision_10: 0.9818 - recall_10: 0.9799 - f2_score: 0.9803 - val_loss: 0.0765 - val_accuracy: 0.9744 - val_precision_10: 0.9744 - val_recall_10: 0.9744 - val_f2_score: 0.9744
Epoch 91/100
50/50 [==============================] - 2s 32ms/step - loss: 0.0449 - accuracy: 0.9840 - precision_10: 0.9848 - recall_10: 0.9835 - f2_score: 0.9838 - val_loss: 0.0820 - val_accuracy: 0.9701 - val_precision_10: 0.9715 - val_recall_10: 0.9687 - val_f2_score: 0.9693
Epoch 92/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0406 - accuracy: 0.9842 - precision_10: 0.9848 - recall_10: 0.9834 - f2_score: 0.9837 - val_loss: 0.0742 - val_accuracy: 0.9815 - val_precision_10: 0.9815 - val_recall_10: 0.9815 - val_f2_score: 0.9815
Epoch 93/100
50/50 [==============================] - 2s 42ms/step - loss: 0.0605 - accuracy: 0.9771 - precision_10: 0.9781 - recall_10: 0.9759 - f2_score: 0.9764 - val_loss: 0.0772 - val_accuracy: 0.9701 - val_precision_10: 0.9701 - val_recall_10: 0.9687 - val_f2_score: 0.9690
Epoch 94/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0618 - accuracy: 0.9786 - precision_10: 0.9794 - recall_10: 0.9775 - f2_score: 0.9779 - val_loss: 0.0499 - val_accuracy: 0.9772 - val_precision_10: 0.9786 - val_recall_10: 0.9772 - val_f2_score: 0.9775
Epoch 95/100
50/50 [==============================] - 1s 20ms/step - loss: 0.0544 - accuracy: 0.9807 - precision_10: 0.9810 - recall_10: 0.9799 - f2_score: 0.9801 - val_loss: 0.1012 - val_accuracy: 0.9701 - val_precision_10: 0.9715 - val_recall_10: 0.9687 - val_f2_score: 0.9693
Epoch 96/100
50/50 [==============================] - 1s 26ms/step - loss: 0.0627 - accuracy: 0.9774 - precision_10: 0.9784 - recall_10: 0.9763 - f2_score: 0.9767 - val_loss: 0.0487 - val_accuracy: 0.9787 - val_precision_10: 0.9801 - val_recall_10: 0.9787 - val_f2_score: 0.9789
Epoch 97/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0526 - accuracy: 0.9828 - precision_10: 0.9835 - recall_10: 0.9820 - f2_score: 0.9823 - val_loss: 0.0922 - val_accuracy: 0.9716 - val_precision_10: 0.9716 - val_recall_10: 0.9716 - val_f2_score: 0.9716
Epoch 98/100
50/50 [==============================] - 2s 41ms/step - loss: 0.0514 - accuracy: 0.9826 - precision_10: 0.9837 - recall_10: 0.9820 - f2_score: 0.9823 - val_loss: 0.0566 - val_accuracy: 0.9858 - val_precision_10: 0.9858 - val_recall_10: 0.9858 - val_f2_score: 0.9858
Epoch 99/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0458 - accuracy: 0.9831 - precision_10: 0.9837 - recall_10: 0.9828 - f2_score: 0.9829 - val_loss: 0.0776 - val_accuracy: 0.9772 - val_precision_10: 0.9772 - val_recall_10: 0.9772 - val_f2_score: 0.9772
Epoch 100/100
50/50 [==============================] - 1s 20ms/step - loss: 0.0529 - accuracy: 0.9823 - precision_10: 0.9830 - recall_10: 0.9816 - f2_score: 0.9819 - val_loss: 0.0557 - val_accuracy: 0.9787 - val_precision_10: 0.9787 - val_recall_10: 0.9787 - val_f2_score: 0.9787
In [ ]:
# Plot training F2
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(H2B.history['f2_score'], label = 'training')

# Plot validation F2 Score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(H2B.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot Training Loss
plt.subplot(1, 2, 2)
plt.plot(H2B.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot Validation Loss
plt.plot(H2B.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = Conv2B.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred, NUM_CLASSES, 'Conv2B')

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {H2B.history['val_f2_score']}\n\tRecall: {H2B.history['val_recall_10']}")
print(f"\tPrecision: {H2B.history['val_precision_10']}\n\tAccuracy: {H2B.history['val_accuracy']}")
No description has been provided for this image
22/22 [==============================] - 0s 8ms/step
No description has been provided for this image
No description has been provided for this image
Validation Scores:
	F2 Score: [0.46603715419769287, 0.5929149985313416, 0.679411768913269, 0.7128159403800964, 0.761090099811554, 0.7666761875152588, 0.7628836035728455, 0.7909116744995117, 0.7668007016181946, 0.8160621523857117, 0.8295129537582397, 0.8493268489837646, 0.8644793629646301, 0.840011477470398, 0.8815112113952637, 0.8746774196624756, 0.8965222835540771, 0.8921933174133301, 0.9215405583381653, 0.9225934743881226, 0.9375891089439392, 0.9219151139259338, 0.904666543006897, 0.933314323425293, 0.9228572249412537, 0.9384791254997253, 0.9372325539588928, 0.9253986477851868, 0.9455995559692383, 0.9441754221916199, 0.9055903553962708, 0.9536008834838867, 0.949871838092804, 0.9512959122657776, 0.9655369520187378, 0.9698092341423035, 0.9532630443572998, 0.9435575604438782, 0.9612646102905273, 0.9638382792472839, 0.9413272738456726, 0.9683850407600403, 0.9552960991859436, 0.9706802368164062, 0.9738041162490845, 0.9658120274543762, 0.9590210914611816, 0.9672363996505737, 0.9607173800468445, 0.9715505838394165, 0.9661354422569275, 0.9678336977958679, 0.9664105772972107, 0.9661354422569275, 0.9652619957923889, 0.9775184988975525, 0.9860557913780212, 0.9789413213729858, 0.9743954539299011, 0.9766515493392944, 0.9649872183799744, 0.9718270897865295, 0.9604438543319702, 0.9618668556213379, 0.9789413213729858, 0.9504837393760681, 0.9815077781677246, 0.9681093692779541, 0.9800854325294495, 0.9743954539299011, 0.9815077781677246, 0.9815077781677246, 0.9758179187774658, 0.9758179187774658, 0.9860557913780212, 0.9815077781677246, 0.9587481617927551, 0.9829303622245789, 0.9718270897865295, 0.9701279997825623, 0.9800854325294495, 0.9743954539299011, 0.9789413213729858, 0.9758179187774658, 0.9817870855331421, 0.97866290807724, 0.9772403836250305, 0.9800854325294495, 0.9718270897865295, 0.9743954539299011, 0.9692569971084595, 0.9815077781677246, 0.9689812064170837, 0.9775184988975525, 0.9692569971084595, 0.9789413213729858, 0.9715505838394165, 0.9857752323150635, 0.9772403836250305, 0.97866290807724]
	Recall: [0.43527737259864807, 0.5618776679039001, 0.6571835279464722, 0.6899003982543945, 0.7467994093894958, 0.7553342580795288, 0.74964439868927, 0.7823613286018372, 0.7596017122268677, 0.8065434098243713, 0.8236131072044373, 0.8435277342796326, 0.8620198965072632, 0.8349928855895996, 0.8762446641921997, 0.8677098155021667, 0.8947368264198303, 0.8876244425773621, 0.9189189076423645, 0.9189189076423645, 0.9359886050224304, 0.9203413724899292, 0.8990042805671692, 0.9317212104797363, 0.9189189076423645, 0.9374110698699951, 0.9345661401748657, 0.9246088266372681, 0.9445234537124634, 0.9431009888648987, 0.9032716751098633, 0.9530583024024963, 0.9487909078598022, 0.9502133727073669, 0.9644381403923035, 0.9687055349349976, 0.9516358375549316, 0.941678524017334, 0.9601706862449646, 0.9630156755447388, 0.9402560591697693, 0.9672830700874329, 0.954480767250061, 0.9701279997825623, 0.9729729890823364, 0.9644381403923035, 0.9587482213973999, 0.9658606052398682, 0.9601706862449646, 0.9715505242347717, 0.9658606052398682, 0.9672830700874329, 0.9658606052398682, 0.9658606052398682, 0.9644381403923035, 0.9772403836250305, 0.9857752323150635, 0.9786628484725952, 0.9743954539299011, 0.9758179187774658, 0.9644381403923035, 0.9715505242347717, 0.9601706862449646, 0.9615931510925293, 0.9786628484725952, 0.9502133727073669, 0.9815078377723694, 0.9672830700874329, 0.9800853729248047, 0.9743954539299011, 0.9815078377723694, 0.9815078377723694, 0.9758179187774658, 0.9758179187774658, 0.9857752323150635, 0.9815078377723694, 0.9587482213973999, 0.9829303026199341, 0.9715505242347717, 0.9701279997825623, 0.9800853729248047, 0.9743954539299011, 0.9786628484725952, 0.9758179187774658, 0.9815078377723694, 0.9786628484725952, 0.9772403836250305, 0.9800853729248047, 0.9715505242347717, 0.9743954539299011, 0.9687055349349976, 0.9815078377723694, 0.9687055349349976, 0.9772403836250305, 0.9687055349349976, 0.9786628484725952, 0.9715505242347717, 0.9857752323150635, 0.9772403836250305, 0.9786628484725952]
	Precision: [0.6496815085411072, 0.7610790133476257, 0.7857142686843872, 0.8220338821411133, 0.8241758346557617, 0.8156682252883911, 0.8208722472190857, 0.8270676732063293, 0.7970149517059326, 0.8564954400062561, 0.8539823293685913, 0.8733431696891785, 0.8744588494300842, 0.8607038259506226, 0.9032257795333862, 0.9037036895751953, 0.9037356376647949, 0.9109489321708679, 0.9321789145469666, 0.9375907182693481, 0.944045901298523, 0.9282639622688293, 0.9280470013618469, 0.9397417306900024, 0.9389534592628479, 0.9427753686904907, 0.948051929473877, 0.9285714030265808, 0.9499284625053406, 0.9484978318214417, 0.9149855971336365, 0.9557774662971497, 0.9542202949523926, 0.9556509256362915, 0.9699570536613464, 0.9742489457130432, 0.9598278403282166, 0.9511494040489197, 0.9656652212142944, 0.9671428799629211, 0.9456366300582886, 0.9728183150291443, 0.9585714340209961, 0.9728958606719971, 0.977142870426178, 0.9713466763496399, 0.9601139426231384, 0.972779393196106, 0.9629101157188416, 0.9715505242347717, 0.9672364592552185, 0.9700428247451782, 0.968616247177124, 0.9672364592552185, 0.9685714244842529, 0.9786324501037598, 0.9871794581413269, 0.9800570011138916, 0.9743954539299011, 0.9800000190734863, 0.9671897292137146, 0.9729344844818115, 0.9615384340286255, 0.9629629850387573, 0.9800570011138916, 0.9515669345855713, 0.9815078377723694, 0.9714285731315613, 0.9800853729248047, 0.9743954539299011, 0.9815078377723694, 0.9815078377723694, 0.9758179187774658, 0.9758179187774658, 0.9871794581413269, 0.9815078377723694, 0.9587482213973999, 0.9829303026199341, 0.9729344844818115, 0.9701279997825623, 0.9800853729248047, 0.9743954539299011, 0.9800570011138916, 0.9758179187774658, 0.9829059839248657, 0.9786628484725952, 0.9772403836250305, 0.9800853729248047, 0.9729344844818115, 0.9743954539299011, 0.9714693427085876, 0.9815078377723694, 0.9700854420661926, 0.9786324501037598, 0.9714693427085876, 0.9800570011138916, 0.9715505242347717, 0.9857752323150635, 0.9772403836250305, 0.9786628484725952]
	Accuracy: [0.6002845168113708, 0.682788074016571, 0.7311521768569946, 0.77098149061203, 0.7923186421394348, 0.7908961772918701, 0.7766714096069336, 0.8108108043670654, 0.779516339302063, 0.8349928855895996, 0.8392603397369385, 0.8577525019645691, 0.8691322803497314, 0.8492176532745361, 0.8890469670295715, 0.8890469670295715, 0.9004267454147339, 0.9032716751098633, 0.9189189076423645, 0.9302987456321716, 0.9359886050224304, 0.9231863617897034, 0.9174964427947998, 0.933143675327301, 0.9317212104797363, 0.9374110698699951, 0.9445234537124634, 0.9274537563323975, 0.9473684430122375, 0.9473684430122375, 0.9061166644096375, 0.9530583024024963, 0.9502133727073669, 0.9516358375549316, 0.9687055349349976, 0.9701279997825623, 0.9559032917022705, 0.9473684430122375, 0.9615931510925293, 0.9644381403923035, 0.941678524017334, 0.9672830700874329, 0.9573257565498352, 0.9701279997825623, 0.9729729890823364, 0.9672830700874329, 0.9587482213973999, 0.9687055349349976, 0.9630156755447388, 0.9715505242347717, 0.9672830700874329, 0.9701279997825623, 0.9672830700874329, 0.9658606052398682, 0.9644381403923035, 0.9772403836250305, 0.9857752323150635, 0.9786628484725952, 0.9743954539299011, 0.9786628484725952, 0.9672830700874329, 0.9729729890823364, 0.9601706862449646, 0.9615931510925293, 0.9786628484725952, 0.9502133727073669, 0.9815078377723694, 0.9715505242347717, 0.9800853729248047, 0.9743954539299011, 0.9815078377723694, 0.9815078377723694, 0.9758179187774658, 0.9758179187774658, 0.9871976971626282, 0.9815078377723694, 0.9587482213973999, 0.9829303026199341, 0.9715505242347717, 0.9701279997825623, 0.9800853729248047, 0.9743954539299011, 0.9800853729248047, 0.9758179187774658, 0.9829303026199341, 0.9786628484725952, 0.9772403836250305, 0.9800853729248047, 0.9715505242347717, 0.9743954539299011, 0.9701279997825623, 0.9815078377723694, 0.9701279997825623, 0.9772403836250305, 0.9701279997825623, 0.9786628484725952, 0.9715505242347717, 0.9857752323150635, 0.9772403836250305, 0.9786628484725952]
  • F2 Score: 0.9787
  • Recall: 0.9787
  • Precision: 0.9787
  • Accuracy: 0.9787

Comparing the current model's (Conv2B) confusion matrix and performance metrics with the previous one (Conv2A), we observe the following:

  • Glioma (0): Both models are very effective in classifying Glioma. The current model has 156 true positives with 5 false negatives, while the previous model had 157 true positives and 4 false negatives. They are nearly identical in performance for this category.

  • Meningioma (1): The current model slightly underperforms compared to the previous model for Meningioma, with 159 true positives against 161 from the previous model and 6 false negatives compared to only 1 in the previous model. This could be an area of focus for further refinement.

  • No Tumor (2): Both models show exemplary performance for 'No Tumor' cases, with perfect classification in the current model and only 1 false negative in the previous model.

  • Pituitary (3): The current model shows a slight improvement in classifying Pituitary tumors with 173 true positives and 3 false negatives, compared to 175 true positives and 3 false negatives in the previous model.

The F2 score, which emphasizes the importance of recall, is 0.9787 for the current model, which is marginally lower than the 0.9858 of the previous model. This suggests that the previous model was slightly more effective in terms of balancing the recall and precision while placing a higher emphasis on recall.

It is worth noting that in both models, the F2 score, recall, precision, and accuracy metrics are identically high and have very close values, which indicates both models are well-optimized across these metrics. However, the previous model has a slight edge in performance across all categories.

In [ ]:
# Parameters
f = 4           # No. Filters
l = 6           # No. Layers
k = 5           # Kernel Size (k x k)
lam = 0.0000001 # Kernel Regularization Constant (L2)

# Initialize Sequential Network
Conv2C = Sequential()

# Add Augmentations Directly
# Horizontal Flip, 10% Rotation, 10% Move, Brightness / Contrast Adjust 
Conv2C.add( RandomFlip("horizontal") )
Conv2C.add( RandomRotation(0.1) )
Conv2C.add( RandomTranslation(height_factor = 0.1, width_factor = 0.1) )
Conv2C.add( RandomBrightness(factor = 0.1, value_range = (0.0, 1.0)) )
Conv2C.add( RandomContrast(0.1) ) 

# Add Multiple Layers (Changeable)
for i in range(l):
  
    # Add Convolutional Layer, Follow With Pooling
    # Note: Loosely Following Unet Architecture
    Conv2C.add(Conv2D(filters = (f * 2 ** i),
                    input_shape = (IMG_SIZE, IMG_SIZE, 1),
                    kernel_size = (k, k), 
                    kernel_regularizer = l2(lam),
                    kernel_initializer = 'he_uniform',
                    padding = 'same', 
                    activation = 'relu',
                    data_format = 'channels_last'))
    Conv2C.add(MaxPooling2D(pool_size = (2, 2), data_format = 'channels_last'))

# Flatten After Convolutional Layers
Conv2C.add(Flatten())

# Added Dropout Regularization - Intepret Model Changes
Conv2C.add(Dropout(0.75))
Conv2C.add(Dense(NUM_CLASSES, activation = 'softmax', 
                kernel_initializer = 'glorot_uniform',
                kernel_regularizer = l2(lam)
                ))
In [ ]:
# Train With CC, Adam
Conv2C.compile(loss = 'categorical_crossentropy',
               optimizer = 'adam',
               metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Build Model With Basic Parameters (Build For Grayscale Images)
Conv2C.build((None, IMG_SIZE, IMG_SIZE, 1))
Conv2C.summary()

# Fit Model (High Patience For Full Convergence)
H2C = Conv2C.fit(X_train, y_train_encoded, 
          batch_size = 128,
          epochs = 100, 
          verbose = 1,
          validation_data = (X_test, y_test_encoded))
Model: "sequential_5"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip_5 (RandomFlip)  (None, 256, 256, 1)       0         
                                                                 
 random_rotation_5 (RandomR  (None, 256, 256, 1)       0         
 otation)                                                        
                                                                 
 random_translation_5 (Rand  (None, 256, 256, 1)       0         
 omTranslation)                                                  
                                                                 
 random_brightness_5 (Rando  (None, 256, 256, 1)       0         
 mBrightness)                                                    
                                                                 
 random_contrast_5 (RandomC  (None, 256, 256, 1)       0         
 ontrast)                                                        
                                                                 
 conv2d_24 (Conv2D)          (None, 256, 256, 4)       104       
                                                                 
 max_pooling2d_24 (MaxPooli  (None, 128, 128, 4)       0         
 ng2D)                                                           
                                                                 
 conv2d_25 (Conv2D)          (None, 128, 128, 8)       808       
                                                                 
 max_pooling2d_25 (MaxPooli  (None, 64, 64, 8)         0         
 ng2D)                                                           
                                                                 
 conv2d_26 (Conv2D)          (None, 64, 64, 16)        3216      
                                                                 
 max_pooling2d_26 (MaxPooli  (None, 32, 32, 16)        0         
 ng2D)                                                           
                                                                 
 conv2d_27 (Conv2D)          (None, 32, 32, 32)        12832     
                                                                 
 max_pooling2d_27 (MaxPooli  (None, 16, 16, 32)        0         
 ng2D)                                                           
                                                                 
 conv2d_28 (Conv2D)          (None, 16, 16, 64)        51264     
                                                                 
 max_pooling2d_28 (MaxPooli  (None, 8, 8, 64)          0         
 ng2D)                                                           
                                                                 
 conv2d_29 (Conv2D)          (None, 8, 8, 128)         204928    
                                                                 
 max_pooling2d_29 (MaxPooli  (None, 4, 4, 128)         0         
 ng2D)                                                           
                                                                 
 flatten_5 (Flatten)         (None, 2048)              0         
                                                                 
 dropout_1 (Dropout)         (None, 2048)              0         
                                                                 
 dense_5 (Dense)             (None, 4)                 8196      
                                                                 
=================================================================
Total params: 281348 (1.07 MB)
Trainable params: 281348 (1.07 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
50/50 [==============================] - 5s 29ms/step - loss: 1.2788 - accuracy: 0.4011 - precision_12: 0.6052 - recall_12: 0.1206 - f2_score: 0.1436 - val_loss: 0.9279 - val_accuracy: 0.5804 - val_precision_12: 0.7261 - val_recall_12: 0.3883 - val_f2_score: 0.4282
Epoch 2/100
50/50 [==============================] - 1s 22ms/step - loss: 0.9460 - accuracy: 0.5915 - precision_12: 0.6961 - recall_12: 0.4005 - f2_score: 0.4376 - val_loss: 0.7436 - val_accuracy: 0.6671 - val_precision_12: 0.7350 - val_recall_12: 0.5761 - val_f2_score: 0.6021
Epoch 3/100
50/50 [==============================] - 1s 21ms/step - loss: 0.8100 - accuracy: 0.6693 - precision_12: 0.7436 - recall_12: 0.5264 - f2_score: 0.5591 - val_loss: 0.7573 - val_accuracy: 0.6856 - val_precision_12: 0.7259 - val_recall_12: 0.6330 - val_f2_score: 0.6496
Epoch 4/100
50/50 [==============================] - 1s 22ms/step - loss: 0.7294 - accuracy: 0.7047 - precision_12: 0.7662 - recall_12: 0.6025 - f2_score: 0.6294 - val_loss: 0.5748 - val_accuracy: 0.7511 - val_precision_12: 0.7769 - val_recall_12: 0.6984 - val_f2_score: 0.7128
Epoch 5/100
50/50 [==============================] - 2s 34ms/step - loss: 0.6667 - accuracy: 0.7258 - precision_12: 0.7734 - recall_12: 0.6519 - f2_score: 0.6730 - val_loss: 0.5753 - val_accuracy: 0.7326 - val_precision_12: 0.7755 - val_recall_12: 0.7127 - val_f2_score: 0.7244
Epoch 6/100
50/50 [==============================] - 1s 21ms/step - loss: 0.6077 - accuracy: 0.7517 - precision_12: 0.7933 - recall_12: 0.6994 - f2_score: 0.7163 - val_loss: 0.5414 - val_accuracy: 0.7866 - val_precision_12: 0.8175 - val_recall_12: 0.7326 - val_f2_score: 0.7481
Epoch 7/100
50/50 [==============================] - 1s 21ms/step - loss: 0.6040 - accuracy: 0.7555 - precision_12: 0.7936 - recall_12: 0.6998 - f2_score: 0.7168 - val_loss: 0.4886 - val_accuracy: 0.7952 - val_precision_12: 0.8289 - val_recall_12: 0.7511 - val_f2_score: 0.7654
Epoch 8/100
50/50 [==============================] - 1s 22ms/step - loss: 0.5589 - accuracy: 0.7858 - precision_12: 0.8190 - recall_12: 0.7309 - f2_score: 0.7469 - val_loss: 0.4343 - val_accuracy: 0.8151 - val_precision_12: 0.8358 - val_recall_12: 0.7966 - val_f2_score: 0.8041
Epoch 9/100
50/50 [==============================] - 1s 22ms/step - loss: 0.5253 - accuracy: 0.7919 - precision_12: 0.8248 - recall_12: 0.7511 - f2_score: 0.7648 - val_loss: 0.5276 - val_accuracy: 0.7980 - val_precision_12: 0.8220 - val_recall_12: 0.7425 - val_f2_score: 0.7572
Epoch 10/100
50/50 [==============================] - 2s 46ms/step - loss: 0.4741 - accuracy: 0.8123 - precision_12: 0.8411 - recall_12: 0.7755 - f2_score: 0.7878 - val_loss: 0.3845 - val_accuracy: 0.8350 - val_precision_12: 0.8531 - val_recall_12: 0.8179 - val_f2_score: 0.8247
Epoch 11/100
50/50 [==============================] - 2s 30ms/step - loss: 0.4503 - accuracy: 0.8307 - precision_12: 0.8535 - recall_12: 0.8022 - f2_score: 0.8120 - val_loss: 0.4024 - val_accuracy: 0.8407 - val_precision_12: 0.8671 - val_recall_12: 0.8165 - val_f2_score: 0.8261
Epoch 12/100
50/50 [==============================] - 1s 24ms/step - loss: 0.4469 - accuracy: 0.8335 - precision_12: 0.8580 - recall_12: 0.8030 - f2_score: 0.8134 - val_loss: 0.3228 - val_accuracy: 0.8706 - val_precision_12: 0.8759 - val_recall_12: 0.8634 - val_f2_score: 0.8659
Epoch 13/100
50/50 [==============================] - 1s 21ms/step - loss: 0.4111 - accuracy: 0.8422 - precision_12: 0.8629 - recall_12: 0.8188 - f2_score: 0.8273 - val_loss: 0.3536 - val_accuracy: 0.8578 - val_precision_12: 0.8750 - val_recall_12: 0.8464 - val_f2_score: 0.8519
Epoch 14/100
50/50 [==============================] - 1s 21ms/step - loss: 0.4018 - accuracy: 0.8449 - precision_12: 0.8680 - recall_12: 0.8218 - f2_score: 0.8307 - val_loss: 0.2712 - val_accuracy: 0.8962 - val_precision_12: 0.9028 - val_recall_12: 0.8848 - val_f2_score: 0.8883
Epoch 15/100
50/50 [==============================] - 2s 43ms/step - loss: 0.3514 - accuracy: 0.8706 - precision_12: 0.8852 - recall_12: 0.8543 - f2_score: 0.8603 - val_loss: 0.2970 - val_accuracy: 0.8819 - val_precision_12: 0.9009 - val_recall_12: 0.8791 - val_f2_score: 0.8834
Epoch 16/100
50/50 [==============================] - 1s 22ms/step - loss: 0.3302 - accuracy: 0.8790 - precision_12: 0.8961 - recall_12: 0.8612 - f2_score: 0.8680 - val_loss: 0.2579 - val_accuracy: 0.8976 - val_precision_12: 0.9139 - val_recall_12: 0.8905 - val_f2_score: 0.8951
Epoch 17/100
50/50 [==============================] - 2s 34ms/step - loss: 0.3184 - accuracy: 0.8858 - precision_12: 0.8983 - recall_12: 0.8693 - f2_score: 0.8750 - val_loss: 0.1989 - val_accuracy: 0.9303 - val_precision_12: 0.9405 - val_recall_12: 0.9218 - val_f2_score: 0.9254
Epoch 18/100
50/50 [==============================] - 1s 21ms/step - loss: 0.3086 - accuracy: 0.8862 - precision_12: 0.8994 - recall_12: 0.8669 - f2_score: 0.8732 - val_loss: 0.2695 - val_accuracy: 0.8990 - val_precision_12: 0.9175 - val_recall_12: 0.8862 - val_f2_score: 0.8923
Epoch 19/100
50/50 [==============================] - 1s 28ms/step - loss: 0.2893 - accuracy: 0.8981 - precision_12: 0.9086 - recall_12: 0.8840 - f2_score: 0.8888 - val_loss: 0.2482 - val_accuracy: 0.9075 - val_precision_12: 0.9180 - val_recall_12: 0.8919 - val_f2_score: 0.8970
Epoch 20/100
50/50 [==============================] - 2s 36ms/step - loss: 0.2719 - accuracy: 0.9044 - precision_12: 0.9150 - recall_12: 0.8907 - f2_score: 0.8954 - val_loss: 0.2075 - val_accuracy: 0.9161 - val_precision_12: 0.9192 - val_recall_12: 0.9061 - val_f2_score: 0.9087
Epoch 21/100
50/50 [==============================] - 1s 21ms/step - loss: 0.2567 - accuracy: 0.9057 - precision_12: 0.9171 - recall_12: 0.8932 - f2_score: 0.8979 - val_loss: 0.1704 - val_accuracy: 0.9289 - val_precision_12: 0.9361 - val_recall_12: 0.9175 - val_f2_score: 0.9212
Epoch 22/100
50/50 [==============================] - 1s 21ms/step - loss: 0.2634 - accuracy: 0.8972 - precision_12: 0.9090 - recall_12: 0.8881 - f2_score: 0.8922 - val_loss: 0.1952 - val_accuracy: 0.9275 - val_precision_12: 0.9323 - val_recall_12: 0.9203 - val_f2_score: 0.9227
Epoch 23/100
50/50 [==============================] - 1s 22ms/step - loss: 0.2374 - accuracy: 0.9153 - precision_12: 0.9237 - recall_12: 0.9041 - f2_score: 0.9080 - val_loss: 0.1841 - val_accuracy: 0.9275 - val_precision_12: 0.9391 - val_recall_12: 0.9218 - val_f2_score: 0.9252
Epoch 24/100
50/50 [==============================] - 1s 23ms/step - loss: 0.2628 - accuracy: 0.9051 - precision_12: 0.9156 - recall_12: 0.8924 - f2_score: 0.8969 - val_loss: 0.1925 - val_accuracy: 0.9175 - val_precision_12: 0.9251 - val_recall_12: 0.9132 - val_f2_score: 0.9156
Epoch 25/100
50/50 [==============================] - 2s 35ms/step - loss: 0.2273 - accuracy: 0.9180 - precision_12: 0.9284 - recall_12: 0.9090 - f2_score: 0.9128 - val_loss: 0.1617 - val_accuracy: 0.9317 - val_precision_12: 0.9342 - val_recall_12: 0.9289 - val_f2_score: 0.9299
Epoch 26/100
50/50 [==============================] - 1s 27ms/step - loss: 0.2289 - accuracy: 0.9185 - precision_12: 0.9251 - recall_12: 0.9093 - f2_score: 0.9125 - val_loss: 0.1592 - val_accuracy: 0.9488 - val_precision_12: 0.9563 - val_recall_12: 0.9331 - val_f2_score: 0.9377
Epoch 27/100
50/50 [==============================] - 1s 21ms/step - loss: 0.2108 - accuracy: 0.9193 - precision_12: 0.9293 - recall_12: 0.9127 - f2_score: 0.9159 - val_loss: 0.1719 - val_accuracy: 0.9289 - val_precision_12: 0.9338 - val_recall_12: 0.9232 - val_f2_score: 0.9253
Epoch 28/100
50/50 [==============================] - 1s 25ms/step - loss: 0.2079 - accuracy: 0.9271 - precision_12: 0.9345 - recall_12: 0.9193 - f2_score: 0.9223 - val_loss: 0.1462 - val_accuracy: 0.9474 - val_precision_12: 0.9498 - val_recall_12: 0.9417 - val_f2_score: 0.9433
Epoch 29/100
50/50 [==============================] - 1s 26ms/step - loss: 0.1908 - accuracy: 0.9294 - precision_12: 0.9374 - recall_12: 0.9241 - f2_score: 0.9267 - val_loss: 0.1283 - val_accuracy: 0.9459 - val_precision_12: 0.9485 - val_recall_12: 0.9431 - val_f2_score: 0.9442
Epoch 30/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1865 - accuracy: 0.9305 - precision_12: 0.9357 - recall_12: 0.9241 - f2_score: 0.9264 - val_loss: 0.1277 - val_accuracy: 0.9516 - val_precision_12: 0.9556 - val_recall_12: 0.9488 - val_f2_score: 0.9501
Epoch 31/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1797 - accuracy: 0.9342 - precision_12: 0.9402 - recall_12: 0.9282 - f2_score: 0.9305 - val_loss: 0.1506 - val_accuracy: 0.9431 - val_precision_12: 0.9509 - val_recall_12: 0.9374 - val_f2_score: 0.9401
Epoch 32/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1720 - accuracy: 0.9362 - precision_12: 0.9441 - recall_12: 0.9294 - f2_score: 0.9323 - val_loss: 0.1427 - val_accuracy: 0.9474 - val_precision_12: 0.9500 - val_recall_12: 0.9459 - val_f2_score: 0.9468
Epoch 33/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1786 - accuracy: 0.9332 - precision_12: 0.9394 - recall_12: 0.9275 - f2_score: 0.9299 - val_loss: 0.1184 - val_accuracy: 0.9559 - val_precision_12: 0.9599 - val_recall_12: 0.9545 - val_f2_score: 0.9556
Epoch 34/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1603 - accuracy: 0.9432 - precision_12: 0.9487 - recall_12: 0.9384 - f2_score: 0.9405 - val_loss: 0.1115 - val_accuracy: 0.9587 - val_precision_12: 0.9598 - val_recall_12: 0.9516 - val_f2_score: 0.9533
Epoch 35/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1532 - accuracy: 0.9435 - precision_12: 0.9498 - recall_12: 0.9400 - f2_score: 0.9420 - val_loss: 0.1035 - val_accuracy: 0.9630 - val_precision_12: 0.9656 - val_recall_12: 0.9587 - val_f2_score: 0.9601
Epoch 36/100
50/50 [==============================] - 2s 38ms/step - loss: 0.1428 - accuracy: 0.9503 - precision_12: 0.9551 - recall_12: 0.9449 - f2_score: 0.9469 - val_loss: 0.1440 - val_accuracy: 0.9417 - val_precision_12: 0.9442 - val_recall_12: 0.9388 - val_f2_score: 0.9399
Epoch 37/100
50/50 [==============================] - 1s 24ms/step - loss: 0.1700 - accuracy: 0.9384 - precision_12: 0.9442 - recall_12: 0.9315 - f2_score: 0.9340 - val_loss: 0.1242 - val_accuracy: 0.9445 - val_precision_12: 0.9484 - val_recall_12: 0.9417 - val_f2_score: 0.9430
Epoch 38/100
50/50 [==============================] - 1s 24ms/step - loss: 0.1671 - accuracy: 0.9424 - precision_12: 0.9488 - recall_12: 0.9362 - f2_score: 0.9387 - val_loss: 0.0902 - val_accuracy: 0.9673 - val_precision_12: 0.9673 - val_recall_12: 0.9673 - val_f2_score: 0.9673
Epoch 39/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1582 - accuracy: 0.9451 - precision_12: 0.9493 - recall_12: 0.9394 - f2_score: 0.9414 - val_loss: 0.0813 - val_accuracy: 0.9772 - val_precision_12: 0.9771 - val_recall_12: 0.9716 - val_f2_score: 0.9727
Epoch 40/100
50/50 [==============================] - 1s 25ms/step - loss: 0.1546 - accuracy: 0.9457 - precision_12: 0.9504 - recall_12: 0.9427 - f2_score: 0.9442 - val_loss: 0.1612 - val_accuracy: 0.9431 - val_precision_12: 0.9428 - val_recall_12: 0.9374 - val_f2_score: 0.9385
Epoch 41/100
50/50 [==============================] - 2s 34ms/step - loss: 0.1406 - accuracy: 0.9465 - precision_12: 0.9502 - recall_12: 0.9419 - f2_score: 0.9436 - val_loss: 0.1113 - val_accuracy: 0.9587 - val_precision_12: 0.9615 - val_recall_12: 0.9587 - val_f2_score: 0.9593
Epoch 42/100
50/50 [==============================] - 1s 26ms/step - loss: 0.1395 - accuracy: 0.9487 - precision_12: 0.9531 - recall_12: 0.9460 - f2_score: 0.9475 - val_loss: 0.0914 - val_accuracy: 0.9644 - val_precision_12: 0.9671 - val_recall_12: 0.9630 - val_f2_score: 0.9638
Epoch 43/100
50/50 [==============================] - 1s 24ms/step - loss: 0.1433 - accuracy: 0.9456 - precision_12: 0.9493 - recall_12: 0.9422 - f2_score: 0.9437 - val_loss: 0.0987 - val_accuracy: 0.9587 - val_precision_12: 0.9614 - val_recall_12: 0.9559 - val_f2_score: 0.9570
Epoch 44/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1349 - accuracy: 0.9502 - precision_12: 0.9527 - recall_12: 0.9465 - f2_score: 0.9477 - val_loss: 0.0829 - val_accuracy: 0.9659 - val_precision_12: 0.9686 - val_recall_12: 0.9659 - val_f2_score: 0.9664
Epoch 45/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1306 - accuracy: 0.9527 - precision_12: 0.9569 - recall_12: 0.9484 - f2_score: 0.9501 - val_loss: 0.0825 - val_accuracy: 0.9701 - val_precision_12: 0.9728 - val_recall_12: 0.9673 - val_f2_score: 0.9684
Epoch 46/100
50/50 [==============================] - 2s 33ms/step - loss: 0.1215 - accuracy: 0.9570 - precision_12: 0.9605 - recall_12: 0.9541 - f2_score: 0.9554 - val_loss: 0.0775 - val_accuracy: 0.9716 - val_precision_12: 0.9715 - val_recall_12: 0.9701 - val_f2_score: 0.9704
Epoch 47/100
50/50 [==============================] - 2s 30ms/step - loss: 0.1141 - accuracy: 0.9617 - precision_12: 0.9645 - recall_12: 0.9585 - f2_score: 0.9597 - val_loss: 0.0945 - val_accuracy: 0.9673 - val_precision_12: 0.9727 - val_recall_12: 0.9630 - val_f2_score: 0.9649
Epoch 48/100
50/50 [==============================] - 1s 23ms/step - loss: 0.1209 - accuracy: 0.9565 - precision_12: 0.9599 - recall_12: 0.9535 - f2_score: 0.9547 - val_loss: 0.0786 - val_accuracy: 0.9687 - val_precision_12: 0.9728 - val_recall_12: 0.9659 - val_f2_score: 0.9672
Epoch 49/100
50/50 [==============================] - 2s 36ms/step - loss: 0.1326 - accuracy: 0.9505 - precision_12: 0.9544 - recall_12: 0.9465 - f2_score: 0.9481 - val_loss: 0.0887 - val_accuracy: 0.9701 - val_precision_12: 0.9700 - val_recall_12: 0.9673 - val_f2_score: 0.9678
Epoch 50/100
50/50 [==============================] - 2s 32ms/step - loss: 0.1141 - accuracy: 0.9574 - precision_12: 0.9600 - recall_12: 0.9536 - f2_score: 0.9549 - val_loss: 0.0728 - val_accuracy: 0.9744 - val_precision_12: 0.9785 - val_recall_12: 0.9730 - val_f2_score: 0.9741
Epoch 51/100
50/50 [==============================] - 2s 33ms/step - loss: 0.1113 - accuracy: 0.9578 - precision_12: 0.9604 - recall_12: 0.9549 - f2_score: 0.9560 - val_loss: 0.0624 - val_accuracy: 0.9772 - val_precision_12: 0.9772 - val_recall_12: 0.9772 - val_f2_score: 0.9772
Epoch 52/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1173 - accuracy: 0.9581 - precision_12: 0.9622 - recall_12: 0.9557 - f2_score: 0.9570 - val_loss: 0.0736 - val_accuracy: 0.9687 - val_precision_12: 0.9701 - val_recall_12: 0.9687 - val_f2_score: 0.9690
Epoch 53/100
50/50 [==============================] - 1s 29ms/step - loss: 0.1077 - accuracy: 0.9606 - precision_12: 0.9649 - recall_12: 0.9568 - f2_score: 0.9584 - val_loss: 0.0716 - val_accuracy: 0.9744 - val_precision_12: 0.9757 - val_recall_12: 0.9730 - val_f2_score: 0.9735
Epoch 54/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1307 - accuracy: 0.9540 - precision_12: 0.9561 - recall_12: 0.9502 - f2_score: 0.9513 - val_loss: 0.0666 - val_accuracy: 0.9716 - val_precision_12: 0.9729 - val_recall_12: 0.9687 - val_f2_score: 0.9695
Epoch 55/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1115 - accuracy: 0.9617 - precision_12: 0.9654 - recall_12: 0.9578 - f2_score: 0.9593 - val_loss: 0.0803 - val_accuracy: 0.9716 - val_precision_12: 0.9716 - val_recall_12: 0.9716 - val_f2_score: 0.9716
Epoch 56/100
50/50 [==============================] - 2s 33ms/step - loss: 0.1103 - accuracy: 0.9628 - precision_12: 0.9646 - recall_12: 0.9581 - f2_score: 0.9594 - val_loss: 0.0586 - val_accuracy: 0.9787 - val_precision_12: 0.9786 - val_recall_12: 0.9772 - val_f2_score: 0.9775
Epoch 57/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1049 - accuracy: 0.9620 - precision_12: 0.9651 - recall_12: 0.9590 - f2_score: 0.9602 - val_loss: 0.0506 - val_accuracy: 0.9815 - val_precision_12: 0.9815 - val_recall_12: 0.9815 - val_f2_score: 0.9815
Epoch 58/100
50/50 [==============================] - 1s 25ms/step - loss: 0.0915 - accuracy: 0.9685 - precision_12: 0.9709 - recall_12: 0.9661 - f2_score: 0.9671 - val_loss: 0.0538 - val_accuracy: 0.9758 - val_precision_12: 0.9785 - val_recall_12: 0.9730 - val_f2_score: 0.9741
Epoch 59/100
50/50 [==============================] - 2s 31ms/step - loss: 0.0929 - accuracy: 0.9699 - precision_12: 0.9729 - recall_12: 0.9666 - f2_score: 0.9679 - val_loss: 0.0467 - val_accuracy: 0.9844 - val_precision_12: 0.9857 - val_recall_12: 0.9815 - val_f2_score: 0.9823
Epoch 60/100
50/50 [==============================] - 2s 45ms/step - loss: 0.0953 - accuracy: 0.9669 - precision_12: 0.9696 - recall_12: 0.9646 - f2_score: 0.9656 - val_loss: 0.0806 - val_accuracy: 0.9744 - val_precision_12: 0.9785 - val_recall_12: 0.9701 - val_f2_score: 0.9718
Epoch 61/100
50/50 [==============================] - 1s 22ms/step - loss: 0.1018 - accuracy: 0.9619 - precision_12: 0.9643 - recall_12: 0.9603 - f2_score: 0.9611 - val_loss: 0.0708 - val_accuracy: 0.9701 - val_precision_12: 0.9757 - val_recall_12: 0.9701 - val_f2_score: 0.9712
Epoch 62/100
50/50 [==============================] - 1s 25ms/step - loss: 0.1188 - accuracy: 0.9579 - precision_12: 0.9620 - recall_12: 0.9541 - f2_score: 0.9557 - val_loss: 0.0488 - val_accuracy: 0.9815 - val_precision_12: 0.9829 - val_recall_12: 0.9815 - val_f2_score: 0.9818
Epoch 63/100
50/50 [==============================] - 1s 25ms/step - loss: 0.0878 - accuracy: 0.9703 - precision_12: 0.9717 - recall_12: 0.9687 - f2_score: 0.9693 - val_loss: 0.0691 - val_accuracy: 0.9730 - val_precision_12: 0.9729 - val_recall_12: 0.9716 - val_f2_score: 0.9718
Epoch 64/100
50/50 [==============================] - 1s 27ms/step - loss: 0.0914 - accuracy: 0.9677 - precision_12: 0.9701 - recall_12: 0.9650 - f2_score: 0.9660 - val_loss: 0.0437 - val_accuracy: 0.9858 - val_precision_12: 0.9858 - val_recall_12: 0.9858 - val_f2_score: 0.9858
Epoch 65/100
50/50 [==============================] - 2s 43ms/step - loss: 0.0829 - accuracy: 0.9682 - precision_12: 0.9705 - recall_12: 0.9668 - f2_score: 0.9675 - val_loss: 0.0566 - val_accuracy: 0.9829 - val_precision_12: 0.9828 - val_recall_12: 0.9772 - val_f2_score: 0.9784
Epoch 66/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0978 - accuracy: 0.9666 - precision_12: 0.9688 - recall_12: 0.9642 - f2_score: 0.9652 - val_loss: 0.0504 - val_accuracy: 0.9801 - val_precision_12: 0.9815 - val_recall_12: 0.9787 - val_f2_score: 0.9792
Epoch 67/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0802 - accuracy: 0.9714 - precision_12: 0.9733 - recall_12: 0.9696 - f2_score: 0.9704 - val_loss: 0.0980 - val_accuracy: 0.9630 - val_precision_12: 0.9685 - val_recall_12: 0.9616 - val_f2_score: 0.9630
Epoch 68/100
50/50 [==============================] - 1s 21ms/step - loss: 0.1078 - accuracy: 0.9630 - precision_12: 0.9656 - recall_12: 0.9601 - f2_score: 0.9612 - val_loss: 0.0683 - val_accuracy: 0.9744 - val_precision_12: 0.9757 - val_recall_12: 0.9730 - val_f2_score: 0.9735
Epoch 69/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0865 - accuracy: 0.9699 - precision_12: 0.9723 - recall_12: 0.9668 - f2_score: 0.9679 - val_loss: 0.0488 - val_accuracy: 0.9829 - val_precision_12: 0.9843 - val_recall_12: 0.9829 - val_f2_score: 0.9832
Epoch 70/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0716 - accuracy: 0.9764 - precision_12: 0.9772 - recall_12: 0.9747 - f2_score: 0.9752 - val_loss: 0.0360 - val_accuracy: 0.9886 - val_precision_12: 0.9886 - val_recall_12: 0.9858 - val_f2_score: 0.9863
Epoch 71/100
50/50 [==============================] - 2s 40ms/step - loss: 0.1034 - accuracy: 0.9652 - precision_12: 0.9682 - recall_12: 0.9627 - f2_score: 0.9638 - val_loss: 0.0494 - val_accuracy: 0.9815 - val_precision_12: 0.9815 - val_recall_12: 0.9815 - val_f2_score: 0.9815
Epoch 72/100
50/50 [==============================] - 1s 25ms/step - loss: 0.0867 - accuracy: 0.9710 - precision_12: 0.9734 - recall_12: 0.9688 - f2_score: 0.9697 - val_loss: 0.0319 - val_accuracy: 0.9872 - val_precision_12: 0.9872 - val_recall_12: 0.9872 - val_f2_score: 0.9872
Epoch 73/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0732 - accuracy: 0.9728 - precision_12: 0.9749 - recall_12: 0.9707 - f2_score: 0.9716 - val_loss: 0.0467 - val_accuracy: 0.9829 - val_precision_12: 0.9829 - val_recall_12: 0.9829 - val_f2_score: 0.9829
Epoch 74/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0722 - accuracy: 0.9733 - precision_12: 0.9749 - recall_12: 0.9710 - f2_score: 0.9718 - val_loss: 0.0424 - val_accuracy: 0.9801 - val_precision_12: 0.9829 - val_recall_12: 0.9801 - val_f2_score: 0.9806
Epoch 75/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0831 - accuracy: 0.9696 - precision_12: 0.9717 - recall_12: 0.9679 - f2_score: 0.9686 - val_loss: 0.0434 - val_accuracy: 0.9858 - val_precision_12: 0.9858 - val_recall_12: 0.9844 - val_f2_score: 0.9846
Epoch 76/100
50/50 [==============================] - 2s 37ms/step - loss: 0.0681 - accuracy: 0.9756 - precision_12: 0.9770 - recall_12: 0.9734 - f2_score: 0.9741 - val_loss: 0.0607 - val_accuracy: 0.9758 - val_precision_12: 0.9758 - val_recall_12: 0.9744 - val_f2_score: 0.9747
Epoch 77/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0810 - accuracy: 0.9720 - precision_12: 0.9736 - recall_12: 0.9695 - f2_score: 0.9703 - val_loss: 0.0697 - val_accuracy: 0.9787 - val_precision_12: 0.9800 - val_recall_12: 0.9772 - val_f2_score: 0.9778
Epoch 78/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0829 - accuracy: 0.9685 - precision_12: 0.9705 - recall_12: 0.9668 - f2_score: 0.9675 - val_loss: 0.0338 - val_accuracy: 0.9858 - val_precision_12: 0.9872 - val_recall_12: 0.9844 - val_f2_score: 0.9849
Epoch 79/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0735 - accuracy: 0.9763 - precision_12: 0.9776 - recall_12: 0.9744 - f2_score: 0.9750 - val_loss: 0.0422 - val_accuracy: 0.9844 - val_precision_12: 0.9843 - val_recall_12: 0.9829 - val_f2_score: 0.9832
Epoch 80/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0672 - accuracy: 0.9756 - precision_12: 0.9767 - recall_12: 0.9745 - f2_score: 0.9750 - val_loss: 0.0325 - val_accuracy: 0.9943 - val_precision_12: 0.9943 - val_recall_12: 0.9915 - val_f2_score: 0.9920
Epoch 81/100
50/50 [==============================] - 2s 33ms/step - loss: 0.0681 - accuracy: 0.9752 - precision_12: 0.9762 - recall_12: 0.9739 - f2_score: 0.9744 - val_loss: 0.0394 - val_accuracy: 0.9886 - val_precision_12: 0.9886 - val_recall_12: 0.9872 - val_f2_score: 0.9875
Epoch 82/100
50/50 [==============================] - 1s 25ms/step - loss: 0.0731 - accuracy: 0.9756 - precision_12: 0.9770 - recall_12: 0.9745 - f2_score: 0.9750 - val_loss: 0.0590 - val_accuracy: 0.9772 - val_precision_12: 0.9772 - val_recall_12: 0.9772 - val_f2_score: 0.9772
Epoch 83/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0846 - accuracy: 0.9698 - precision_12: 0.9720 - recall_12: 0.9682 - f2_score: 0.9690 - val_loss: 0.0250 - val_accuracy: 0.9900 - val_precision_12: 0.9900 - val_recall_12: 0.9900 - val_f2_score: 0.9900
Epoch 84/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0742 - accuracy: 0.9737 - precision_12: 0.9755 - recall_12: 0.9715 - f2_score: 0.9723 - val_loss: 0.0399 - val_accuracy: 0.9858 - val_precision_12: 0.9858 - val_recall_12: 0.9844 - val_f2_score: 0.9846
Epoch 85/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0678 - accuracy: 0.9763 - precision_12: 0.9773 - recall_12: 0.9753 - f2_score: 0.9757 - val_loss: 0.0254 - val_accuracy: 0.9957 - val_precision_12: 0.9957 - val_recall_12: 0.9957 - val_f2_score: 0.9957
Epoch 86/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0708 - accuracy: 0.9756 - precision_12: 0.9773 - recall_12: 0.9750 - f2_score: 0.9755 - val_loss: 0.0633 - val_accuracy: 0.9787 - val_precision_12: 0.9800 - val_recall_12: 0.9758 - val_f2_score: 0.9767
Epoch 87/100
50/50 [==============================] - 2s 40ms/step - loss: 0.0700 - accuracy: 0.9739 - precision_12: 0.9756 - recall_12: 0.9729 - f2_score: 0.9735 - val_loss: 0.0504 - val_accuracy: 0.9772 - val_precision_12: 0.9786 - val_recall_12: 0.9772 - val_f2_score: 0.9775
Epoch 88/100
50/50 [==============================] - 1s 24ms/step - loss: 0.0685 - accuracy: 0.9755 - precision_12: 0.9774 - recall_12: 0.9728 - f2_score: 0.9737 - val_loss: 0.0394 - val_accuracy: 0.9900 - val_precision_12: 0.9900 - val_recall_12: 0.9886 - val_f2_score: 0.9889
Epoch 89/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0594 - accuracy: 0.9793 - precision_12: 0.9807 - recall_12: 0.9783 - f2_score: 0.9788 - val_loss: 0.0226 - val_accuracy: 0.9929 - val_precision_12: 0.9929 - val_recall_12: 0.9929 - val_f2_score: 0.9929
Epoch 90/100
50/50 [==============================] - 1s 23ms/step - loss: 0.0713 - accuracy: 0.9739 - precision_12: 0.9757 - recall_12: 0.9718 - f2_score: 0.9726 - val_loss: 0.0527 - val_accuracy: 0.9772 - val_precision_12: 0.9772 - val_recall_12: 0.9772 - val_f2_score: 0.9772
Epoch 91/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0650 - accuracy: 0.9755 - precision_12: 0.9767 - recall_12: 0.9744 - f2_score: 0.9748 - val_loss: 0.0378 - val_accuracy: 0.9844 - val_precision_12: 0.9858 - val_recall_12: 0.9844 - val_f2_score: 0.9846
Epoch 92/100
50/50 [==============================] - 2s 36ms/step - loss: 0.0781 - accuracy: 0.9742 - precision_12: 0.9759 - recall_12: 0.9731 - f2_score: 0.9737 - val_loss: 0.0552 - val_accuracy: 0.9787 - val_precision_12: 0.9787 - val_recall_12: 0.9787 - val_f2_score: 0.9787
Epoch 93/100
50/50 [==============================] - 1s 22ms/step - loss: 0.0635 - accuracy: 0.9785 - precision_12: 0.9792 - recall_12: 0.9774 - f2_score: 0.9777 - val_loss: 0.0252 - val_accuracy: 0.9929 - val_precision_12: 0.9943 - val_recall_12: 0.9915 - val_f2_score: 0.9920
Epoch 94/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0551 - accuracy: 0.9821 - precision_12: 0.9833 - recall_12: 0.9807 - f2_score: 0.9812 - val_loss: 0.0492 - val_accuracy: 0.9787 - val_precision_12: 0.9787 - val_recall_12: 0.9787 - val_f2_score: 0.9787
Epoch 95/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0615 - accuracy: 0.9799 - precision_12: 0.9813 - recall_12: 0.9782 - f2_score: 0.9788 - val_loss: 0.0397 - val_accuracy: 0.9844 - val_precision_12: 0.9844 - val_recall_12: 0.9844 - val_f2_score: 0.9844
Epoch 96/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0575 - accuracy: 0.9790 - precision_12: 0.9805 - recall_12: 0.9780 - f2_score: 0.9785 - val_loss: 0.0491 - val_accuracy: 0.9815 - val_precision_12: 0.9829 - val_recall_12: 0.9787 - val_f2_score: 0.9795
Epoch 97/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0591 - accuracy: 0.9774 - precision_12: 0.9781 - recall_12: 0.9756 - f2_score: 0.9761 - val_loss: 0.0339 - val_accuracy: 0.9872 - val_precision_12: 0.9872 - val_recall_12: 0.9858 - val_f2_score: 0.9861
Epoch 98/100
50/50 [==============================] - 2s 37ms/step - loss: 0.0623 - accuracy: 0.9766 - precision_12: 0.9784 - recall_12: 0.9758 - f2_score: 0.9763 - val_loss: 0.0466 - val_accuracy: 0.9801 - val_precision_12: 0.9801 - val_recall_12: 0.9787 - val_f2_score: 0.9789
Epoch 99/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0684 - accuracy: 0.9739 - precision_12: 0.9753 - recall_12: 0.9733 - f2_score: 0.9737 - val_loss: 0.0308 - val_accuracy: 0.9844 - val_precision_12: 0.9844 - val_recall_12: 0.9844 - val_f2_score: 0.9844
Epoch 100/100
50/50 [==============================] - 1s 21ms/step - loss: 0.0543 - accuracy: 0.9812 - precision_12: 0.9825 - recall_12: 0.9797 - f2_score: 0.9803 - val_loss: 0.0427 - val_accuracy: 0.9872 - val_precision_12: 0.9872 - val_recall_12: 0.9872 - val_f2_score: 0.9872
In [ ]:
# Plot Training F2 score
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(H2C.history['f2_score'], label = 'training')

# Plot Val F2 score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(H2C.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot Training Loss
plt.subplot(1, 2, 2)
plt.plot(H2C.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot Validation Loss
plt.plot(H2C.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = Conv2C.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred, NUM_CLASSES, 'Conv2C')

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {H2C.history['val_f2_score']}\n\tRecall: {H2C.history['val_recall_12']}")
print(f"\tPrecision: {H2C.history['val_precision_12']}\n\tAccuracy: {H2C.history['val_accuracy']}")
No description has been provided for this image
22/22 [==============================] - 0s 4ms/step
No description has been provided for this image
No description has been provided for this image
Validation Scores:
	F2 Score: [0.4281681180000305, 0.6021409034729004, 0.6496350169181824, 0.7128339409828186, 0.7244071364402771, 0.7481115460395813, 0.7654392719268799, 0.8041355609893799, 0.7571802139282227, 0.824727475643158, 0.8261369466781616, 0.8659058809280396, 0.8519473075866699, 0.8883176445960999, 0.8833619356155396, 0.895052969455719, 0.9254499077796936, 0.8922944664955139, 0.8969956636428833, 0.9087018370628357, 0.9211652874946594, 0.9227039217948914, 0.9251855611801147, 0.9155732989311218, 0.9299344420433044, 0.9376785755157471, 0.9252923130989075, 0.9432886838912964, 0.9441754221916199, 0.9501424431800842, 0.9400856494903564, 0.946753978729248, 0.9555681347846985, 0.9532630443572998, 0.9601140022277832, 0.9399032592773438, 0.9430199861526489, 0.9672830700874329, 0.972657322883606, 0.9384791254997253, 0.9592939615249634, 0.9638382792472839, 0.9569923877716064, 0.9664105772972107, 0.9683850407600403, 0.9704041481018066, 0.9649373292922974, 0.9672363996505737, 0.9678336977958679, 0.9740815162658691, 0.9772403836250305, 0.9689812064170837, 0.9735268354415894, 0.9695330262184143, 0.9715505838394165, 0.9775184988975525, 0.9815077781677246, 0.9740815162658691, 0.98234623670578, 0.9717867970466614, 0.9712331891059875, 0.9817870855331421, 0.9718270897865295, 0.9857752323150635, 0.97835373878479, 0.9792199730873108, 0.9629628658294678, 0.9735268354415894, 0.9832099676132202, 0.9863364696502686, 0.9815077781677246, 0.9871976971626282, 0.9829303622245789, 0.9806434512138367, 0.9846329092979431, 0.974672794342041, 0.9777966737747192, 0.984913170337677, 0.9832099676132202, 0.9920295476913452, 0.9874786138534546, 0.9772403836250305, 0.9900427460670471, 0.9846329092979431, 0.9957324862480164, 0.9766515493392944, 0.9775184988975525, 0.9889014959335327, 0.9928876757621765, 0.9772403836250305, 0.9846329092979431, 0.97866290807724, 0.9920295476913452, 0.97866290807724, 0.9843527674674988, 0.9794988036155701, 0.9860557913780212, 0.9789413213729858, 0.9843527674674988, 0.9871976971626282]
	Recall: [0.3883357048034668, 0.5761024355888367, 0.633001446723938, 0.6984353065490723, 0.712660014629364, 0.7325747013092041, 0.7510668635368347, 0.7965860366821289, 0.7425320148468018, 0.8179231882095337, 0.816500723361969, 0.8634423613548279, 0.846372663974762, 0.8847795128822327, 0.8790895938873291, 0.8904694318771362, 0.9217638969421387, 0.8862019777297974, 0.8918918967247009, 0.9061166644096375, 0.9174964427947998, 0.9203413724899292, 0.9217638969421387, 0.9132290482521057, 0.9288762211799622, 0.933143675327301, 0.9231863617897034, 0.941678524017334, 0.9431009888648987, 0.9487909078598022, 0.9374110698699951, 0.9459459185600281, 0.954480767250061, 0.9516358375549316, 0.9587482213973999, 0.9388335943222046, 0.941678524017334, 0.9672830700874329, 0.9715505242347717, 0.9374110698699951, 0.9587482213973999, 0.9630156755447388, 0.9559032917022705, 0.9658606052398682, 0.9672830700874329, 0.9701279997825623, 0.9630156755447388, 0.9658606052398682, 0.9672830700874329, 0.9729729890823364, 0.9772403836250305, 0.9687055349349976, 0.9729729890823364, 0.9687055349349976, 0.9715505242347717, 0.9772403836250305, 0.9815078377723694, 0.9729729890823364, 0.9815078377723694, 0.9701279997825623, 0.9701279997825623, 0.9815078377723694, 0.9715505242347717, 0.9857752323150635, 0.9772403836250305, 0.9786628484725952, 0.9615931510925293, 0.9729729890823364, 0.9829303026199341, 0.9857752323150635, 0.9815078377723694, 0.9871976971626282, 0.9829303026199341, 0.9800853729248047, 0.9843527674674988, 0.9743954539299011, 0.9772403836250305, 0.9843527674674988, 0.9829303026199341, 0.991465151309967, 0.9871976971626282, 0.9772403836250305, 0.9900426864624023, 0.9843527674674988, 0.9957325458526611, 0.9758179187774658, 0.9772403836250305, 0.9886202216148376, 0.9928876161575317, 0.9772403836250305, 0.9843527674674988, 0.9786628484725952, 0.991465151309967, 0.9786628484725952, 0.9843527674674988, 0.9786628484725952, 0.9857752323150635, 0.9786628484725952, 0.9843527674674988, 0.9871976971626282]
	Precision: [0.7260638475418091, 0.7350271940231323, 0.7259380221366882, 0.7768987417221069, 0.7755417823791504, 0.817460298538208, 0.8288853764533997, 0.8358209133148193, 0.822047233581543, 0.8531157374382019, 0.8670694828033447, 0.8759018778800964, 0.875, 0.9027576446533203, 0.9008746147155762, 0.9138686060905457, 0.9404934644699097, 0.9175257682800293, 0.9180088043212891, 0.9191918969154358, 0.9361393451690674, 0.9322766661643982, 0.939130425453186, 0.9250720739364624, 0.9341917037963867, 0.9562682509422302, 0.9338129758834839, 0.9497848153114319, 0.9484978318214417, 0.9555873870849609, 0.9509379267692566, 0.949999988079071, 0.9599427580833435, 0.9598278403282166, 0.9656160473823547, 0.9442059993743896, 0.9484240412712097, 0.9672830700874329, 0.9771101474761963, 0.9427753686904907, 0.9614835977554321, 0.9671428799629211, 0.9613733887672424, 0.968616247177124, 0.9728183150291443, 0.9715099930763245, 0.9727011322975159, 0.972779393196106, 0.9700428247451782, 0.9785407781600952, 0.9772403836250305, 0.9700854420661926, 0.9757489562034607, 0.9728571176528931, 0.9715505242347717, 0.9786324501037598, 0.9815078377723694, 0.9785407781600952, 0.9857142567634583, 0.9784792065620422, 0.9756795167922974, 0.9829059839248657, 0.9729344844818115, 0.9857752323150635, 0.9828326106071472, 0.9814550876617432, 0.9684813618659973, 0.9757489562034607, 0.9843304753303528, 0.9885877370834351, 0.9815078377723694, 0.9871976971626282, 0.9829303026199341, 0.9828816056251526, 0.9857549667358398, 0.9757834672927856, 0.980028510093689, 0.9871612191200256, 0.9843304753303528, 0.9942938685417175, 0.9886040091514587, 0.9772403836250305, 0.9900426864624023, 0.9857549667358398, 0.9957325458526611, 0.9800000190734863, 0.9786324501037598, 0.9900285005569458, 0.9928876161575317, 0.9772403836250305, 0.9857549667358398, 0.9786628484725952, 0.9942938685417175, 0.9786628484725952, 0.9843527674674988, 0.9828571677207947, 0.9871794581413269, 0.9800570011138916, 0.9843527674674988, 0.9871976971626282]
	Accuracy: [0.5803698301315308, 0.6671408414840698, 0.6856330037117004, 0.7510668635368347, 0.7325747013092041, 0.7866287231445312, 0.7951635718345642, 0.8150782585144043, 0.7980085611343384, 0.8349928855895996, 0.8406828045845032, 0.8705547451972961, 0.8577525019645691, 0.896159291267395, 0.8819345831871033, 0.8975818157196045, 0.9302987456321716, 0.8990042805671692, 0.9075391292572021, 0.9160739779472351, 0.9288762211799622, 0.9274537563323975, 0.9274537563323975, 0.9174964427947998, 0.9317212104797363, 0.9487909078598022, 0.9288762211799622, 0.9473684430122375, 0.9459459185600281, 0.9516358375549316, 0.9431009888648987, 0.9473684430122375, 0.9559032917022705, 0.9587482213973999, 0.9630156755447388, 0.941678524017334, 0.9445234537124634, 0.9672830700874329, 0.9772403836250305, 0.9431009888648987, 0.9587482213973999, 0.9644381403923035, 0.9587482213973999, 0.9658606052398682, 0.9701279997825623, 0.9715505242347717, 0.9672830700874329, 0.9687055349349976, 0.9701279997825623, 0.9743954539299011, 0.9772403836250305, 0.9687055349349976, 0.9743954539299011, 0.9715505242347717, 0.9715505242347717, 0.9786628484725952, 0.9815078377723694, 0.9758179187774658, 0.9843527674674988, 0.9743954539299011, 0.9701279997825623, 0.9815078377723694, 0.9729729890823364, 0.9857752323150635, 0.9829303026199341, 0.9800853729248047, 0.9630156755447388, 0.9743954539299011, 0.9829303026199341, 0.9886202216148376, 0.9815078377723694, 0.9871976971626282, 0.9829303026199341, 0.9800853729248047, 0.9857752323150635, 0.9758179187774658, 0.9786628484725952, 0.9857752323150635, 0.9843527674674988, 0.9943100810050964, 0.9886202216148376, 0.9772403836250305, 0.9900426864624023, 0.9857752323150635, 0.9957325458526611, 0.9786628484725952, 0.9772403836250305, 0.9900426864624023, 0.9928876161575317, 0.9772403836250305, 0.9843527674674988, 0.9786628484725952, 0.9928876161575317, 0.9786628484725952, 0.9843527674674988, 0.9815078377723694, 0.9871976971626282, 0.9800853729248047, 0.9843527674674988, 0.9871976971626282]
  • F2 Score: 0.9872
  • Recall: 0.9872
  • Precision: 0.9872
  • Accuracy: 0.9872

The third model, which includes dropout layers, demonstrates even further improved performance compared to the earlier models, as reflected in the confusion matrix and the identical F2 score, recall, precision, and accuracy of 0.9872. From the confusion matrix:

  • Glioma (0): The third model has 156 true positives and 4 false negatives, which is identical to the second model's performance for Glioma, suggesting consistent robustness in this category.

  • Meningioma (1): The third model shows a substantial improvement, with 163 true positives and no false negatives, compared to the second model, which had 161 true positives and 1 false negative. This indicates enhanced sensitivity and reliability for Meningioma classification.

  • No Tumor (2): Both the third and the second models display near-perfect performance, with the third model having 199 true positives and just 1 false negative, compared to the perfect score in the second model. This suggests that both models are highly effective at identifying 'No Tumor' cases with a very low rate of false negatives.

  • Pituitary (3): The third model shows an improvement with 176 true positives and no false negatives, compared to the second model, which had 175 true positives and 1 false negative.

The F2 score of 0.9872 for the third model is slightly higher than the 0.9858 of the second model, indicating that with the addition of dropout layers, the model has become even more effective at classifying tumors with an emphasis on minimizing false negatives. This improvement is critical in medical diagnostics, where the cost of missing a true case (false negative) can have serious consequences.

The consistency of the F2 score, recall, precision, and accuracy at 0.9872 suggests a highly balanced model where the added dropout has likely helped the model prevent overfitting and improved its generalization on unseen data. This is further corroborated by the reduction in false negatives across all categories, which is essential for a medical diagnostic tool where the emphasis is on correctly identifying all positive cases.

In [ ]:
# Regularization
lam = 0.0001

# Grayscale Input
input_shape = (IMG_SIZE, IMG_SIZE, 1)

# Create Standard MLP Architecture
mlp = Sequential()
mlp.add(Flatten(input_shape = input_shape))
mlp.add(Dense(units = 32, activation = 'relu', kernel_regularizer = l2(lam)))
mlp.add(Dense(units = 16, activation = 'relu', kernel_regularizer = l2(lam)))
mlp.add(Dense(NUM_CLASSES, activation = 'softmax'))


# Compile MLP Model
mlp.compile(loss = 'categorical_crossentropy',
            optimizer = 'adam',
            metrics = ['accuracy', Precision(), Recall(), F2Score()])

# Fit MLP Model
mlp_history = mlp.fit(X_train, y_train_encoded, 
                      batch_size = 32, epochs = 150, 
                      shuffle = True, verbose = 1,
                      validation_data = (X_test, y_test_encoded))
Epoch 1/150
198/198 [==============================] - 3s 11ms/step - loss: 1.2337 - accuracy: 0.4231 - precision_16: 0.8312 - recall_16: 0.1823 - f2_score: 0.2160 - val_loss: 1.1169 - val_accuracy: 0.4879 - val_precision_16: 0.8984 - val_recall_16: 0.2390 - val_f2_score: 0.2801
Epoch 2/150
198/198 [==============================] - 1s 6ms/step - loss: 1.0982 - accuracy: 0.5122 - precision_16: 0.8920 - recall_16: 0.2392 - f2_score: 0.2803 - val_loss: 1.0636 - val_accuracy: 0.5292 - val_precision_16: 0.9218 - val_recall_16: 0.2347 - val_f2_score: 0.2758
Epoch 3/150
198/198 [==============================] - 1s 6ms/step - loss: 1.0215 - accuracy: 0.5714 - precision_16: 0.8875 - recall_16: 0.2521 - f2_score: 0.2942 - val_loss: 0.9892 - val_accuracy: 0.5619 - val_precision_16: 0.8584 - val_recall_16: 0.2674 - val_f2_score: 0.3101
Epoch 4/150
198/198 [==============================] - 1s 6ms/step - loss: 0.8421 - accuracy: 0.6671 - precision_16: 0.9044 - recall_16: 0.3441 - f2_score: 0.3928 - val_loss: 0.7231 - val_accuracy: 0.6956 - val_precision_16: 0.8555 - val_recall_16: 0.4296 - val_f2_score: 0.4771
Epoch 5/150
198/198 [==============================] - 1s 5ms/step - loss: 0.7313 - accuracy: 0.6889 - precision_16: 0.8095 - recall_16: 0.4747 - f2_score: 0.5175 - val_loss: 0.6989 - val_accuracy: 0.7013 - val_precision_16: 0.7881 - val_recall_16: 0.5661 - val_f2_score: 0.5999
Epoch 6/150
198/198 [==============================] - 1s 5ms/step - loss: 0.6321 - accuracy: 0.7498 - precision_16: 0.8166 - recall_16: 0.6100 - f2_score: 0.6425 - val_loss: 0.6229 - val_accuracy: 0.7525 - val_precision_16: 0.8346 - val_recall_16: 0.6316 - val_f2_score: 0.6639
Epoch 7/150
198/198 [==============================] - 2s 8ms/step - loss: 0.5645 - accuracy: 0.7847 - precision_16: 0.8317 - recall_16: 0.6941 - f2_score: 0.7179 - val_loss: 0.5893 - val_accuracy: 0.7411 - val_precision_16: 0.7827 - val_recall_16: 0.6814 - val_f2_score: 0.6995
Epoch 8/150
198/198 [==============================] - 1s 6ms/step - loss: 0.5339 - accuracy: 0.7951 - precision_16: 0.8372 - recall_16: 0.7324 - f2_score: 0.7512 - val_loss: 0.5674 - val_accuracy: 0.7795 - val_precision_16: 0.8135 - val_recall_16: 0.7383 - val_f2_score: 0.7522
Epoch 9/150
198/198 [==============================] - 1s 5ms/step - loss: 0.5123 - accuracy: 0.8066 - precision_16: 0.8387 - recall_16: 0.7628 - f2_score: 0.7769 - val_loss: 0.5976 - val_accuracy: 0.7639 - val_precision_16: 0.8019 - val_recall_16: 0.7255 - val_f2_score: 0.7396
Epoch 10/150
198/198 [==============================] - 1s 5ms/step - loss: 0.4836 - accuracy: 0.8267 - precision_16: 0.8499 - recall_16: 0.7918 - f2_score: 0.8027 - val_loss: 0.5352 - val_accuracy: 0.8065 - val_precision_16: 0.8285 - val_recall_16: 0.7696 - val_f2_score: 0.7807
Epoch 11/150
198/198 [==============================] - 1s 5ms/step - loss: 0.4239 - accuracy: 0.8579 - precision_16: 0.8780 - recall_16: 0.8315 - f2_score: 0.8404 - val_loss: 0.4995 - val_accuracy: 0.8179 - val_precision_16: 0.8368 - val_recall_16: 0.7952 - val_f2_score: 0.8032
Epoch 12/150
198/198 [==============================] - 2s 9ms/step - loss: 0.4237 - accuracy: 0.8587 - precision_16: 0.8769 - recall_16: 0.8384 - f2_score: 0.8459 - val_loss: 0.5377 - val_accuracy: 0.7909 - val_precision_16: 0.8072 - val_recall_16: 0.7624 - val_f2_score: 0.7710
Epoch 13/150
198/198 [==============================] - 1s 6ms/step - loss: 0.3856 - accuracy: 0.8782 - precision_16: 0.8906 - recall_16: 0.8592 - f2_score: 0.8653 - val_loss: 0.5005 - val_accuracy: 0.8364 - val_precision_16: 0.8550 - val_recall_16: 0.8137 - val_f2_score: 0.8216
Epoch 14/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3681 - accuracy: 0.8877 - precision_16: 0.8967 - recall_16: 0.8736 - f2_score: 0.8781 - val_loss: 0.4740 - val_accuracy: 0.8151 - val_precision_16: 0.8401 - val_recall_16: 0.7994 - val_f2_score: 0.8072
Epoch 15/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3460 - accuracy: 0.8976 - precision_16: 0.9064 - recall_16: 0.8842 - f2_score: 0.8885 - val_loss: 0.4840 - val_accuracy: 0.8293 - val_precision_16: 0.8365 - val_recall_16: 0.8151 - val_f2_score: 0.8193
Epoch 16/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3618 - accuracy: 0.8884 - precision_16: 0.8975 - recall_16: 0.8786 - f2_score: 0.8824 - val_loss: 0.4609 - val_accuracy: 0.8563 - val_precision_16: 0.8717 - val_recall_16: 0.8407 - val_f2_score: 0.8467
Epoch 17/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3505 - accuracy: 0.8935 - precision_16: 0.9028 - recall_16: 0.8875 - f2_score: 0.8905 - val_loss: 0.4725 - val_accuracy: 0.8478 - val_precision_16: 0.8543 - val_recall_16: 0.8421 - val_f2_score: 0.8445
Epoch 18/150
198/198 [==============================] - 2s 11ms/step - loss: 0.2922 - accuracy: 0.9258 - precision_16: 0.9292 - recall_16: 0.9195 - f2_score: 0.9214 - val_loss: 0.5134 - val_accuracy: 0.8208 - val_precision_16: 0.8258 - val_recall_16: 0.8094 - val_f2_score: 0.8126
Epoch 19/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2997 - accuracy: 0.9163 - precision_16: 0.9196 - recall_16: 0.9116 - f2_score: 0.9131 - val_loss: 0.4826 - val_accuracy: 0.8563 - val_precision_16: 0.8617 - val_recall_16: 0.8506 - val_f2_score: 0.8528
Epoch 20/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2898 - accuracy: 0.9225 - precision_16: 0.9257 - recall_16: 0.9193 - f2_score: 0.9206 - val_loss: 1.1272 - val_accuracy: 0.6486 - val_precision_16: 0.6584 - val_recall_16: 0.6444 - val_f2_score: 0.6471
Epoch 21/150
198/198 [==============================] - 1s 6ms/step - loss: 0.3431 - accuracy: 0.9079 - precision_16: 0.9116 - recall_16: 0.9025 - f2_score: 0.9043 - val_loss: 0.4855 - val_accuracy: 0.8478 - val_precision_16: 0.8534 - val_recall_16: 0.8450 - val_f2_score: 0.8466
Epoch 22/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2723 - accuracy: 0.9307 - precision_16: 0.9341 - recall_16: 0.9282 - f2_score: 0.9293 - val_loss: 0.4926 - val_accuracy: 0.8478 - val_precision_16: 0.8512 - val_recall_16: 0.8378 - val_f2_score: 0.8405
Epoch 23/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2666 - accuracy: 0.9315 - precision_16: 0.9348 - recall_16: 0.9283 - f2_score: 0.9296 - val_loss: 0.5843 - val_accuracy: 0.8321 - val_precision_16: 0.8504 - val_recall_16: 0.8250 - val_f2_score: 0.8300
Epoch 24/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2358 - accuracy: 0.9487 - precision_16: 0.9507 - recall_16: 0.9460 - f2_score: 0.9470 - val_loss: 0.5068 - val_accuracy: 0.8478 - val_precision_16: 0.8530 - val_recall_16: 0.8421 - val_f2_score: 0.8443
Epoch 25/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2141 - accuracy: 0.9562 - precision_16: 0.9574 - recall_16: 0.9521 - f2_score: 0.9531 - val_loss: 0.5005 - val_accuracy: 0.8606 - val_precision_16: 0.8635 - val_recall_16: 0.8549 - val_f2_score: 0.8566
Epoch 26/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2394 - accuracy: 0.9453 - precision_16: 0.9469 - recall_16: 0.9427 - f2_score: 0.9436 - val_loss: 0.6749 - val_accuracy: 0.7881 - val_precision_16: 0.7928 - val_recall_16: 0.7838 - val_f2_score: 0.7856
Epoch 27/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2377 - accuracy: 0.9434 - precision_16: 0.9450 - recall_16: 0.9411 - f2_score: 0.9419 - val_loss: 0.5228 - val_accuracy: 0.8435 - val_precision_16: 0.8528 - val_recall_16: 0.8407 - val_f2_score: 0.8431
Epoch 28/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2346 - accuracy: 0.9445 - precision_16: 0.9464 - recall_16: 0.9424 - f2_score: 0.9432 - val_loss: 0.4810 - val_accuracy: 0.8649 - val_precision_16: 0.8694 - val_recall_16: 0.8620 - val_f2_score: 0.8635
Epoch 29/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2683 - accuracy: 0.9340 - precision_16: 0.9364 - recall_16: 0.9318 - f2_score: 0.9327 - val_loss: 0.5636 - val_accuracy: 0.8293 - val_precision_16: 0.8324 - val_recall_16: 0.8265 - val_f2_score: 0.8276
Epoch 30/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2176 - accuracy: 0.9570 - precision_16: 0.9581 - recall_16: 0.9549 - f2_score: 0.9555 - val_loss: 0.5691 - val_accuracy: 0.8364 - val_precision_16: 0.8442 - val_recall_16: 0.8321 - val_f2_score: 0.8345
Epoch 31/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1988 - accuracy: 0.9650 - precision_16: 0.9662 - recall_16: 0.9638 - f2_score: 0.9643 - val_loss: 0.4846 - val_accuracy: 0.8862 - val_precision_16: 0.8862 - val_recall_16: 0.8862 - val_f2_score: 0.8862
Epoch 32/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2055 - accuracy: 0.9592 - precision_16: 0.9601 - recall_16: 0.9582 - f2_score: 0.9586 - val_loss: 0.4979 - val_accuracy: 0.8649 - val_precision_16: 0.8696 - val_recall_16: 0.8634 - val_f2_score: 0.8647
Epoch 33/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2294 - accuracy: 0.9509 - precision_16: 0.9524 - recall_16: 0.9491 - f2_score: 0.9497 - val_loss: 0.6107 - val_accuracy: 0.8578 - val_precision_16: 0.8627 - val_recall_16: 0.8578 - val_f2_score: 0.8587
Epoch 34/150
198/198 [==============================] - 2s 10ms/step - loss: 0.1947 - accuracy: 0.9627 - precision_16: 0.9643 - recall_16: 0.9616 - f2_score: 0.9621 - val_loss: 0.4930 - val_accuracy: 0.8805 - val_precision_16: 0.8816 - val_recall_16: 0.8791 - val_f2_score: 0.8796
Epoch 35/150
198/198 [==============================] - 1s 7ms/step - loss: 0.1980 - accuracy: 0.9631 - precision_16: 0.9640 - recall_16: 0.9619 - f2_score: 0.9623 - val_loss: 0.7585 - val_accuracy: 0.7937 - val_precision_16: 0.7983 - val_recall_16: 0.7881 - val_f2_score: 0.7901
Epoch 36/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2511 - accuracy: 0.9416 - precision_16: 0.9445 - recall_16: 0.9405 - f2_score: 0.9413 - val_loss: 0.5596 - val_accuracy: 0.8620 - val_precision_16: 0.8694 - val_recall_16: 0.8620 - val_f2_score: 0.8635
Epoch 37/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1652 - accuracy: 0.9796 - precision_16: 0.9797 - recall_16: 0.9793 - f2_score: 0.9794 - val_loss: 0.4899 - val_accuracy: 0.8706 - val_precision_16: 0.8725 - val_recall_16: 0.8663 - val_f2_score: 0.8675
Epoch 38/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2575 - accuracy: 0.9386 - precision_16: 0.9403 - recall_16: 0.9367 - f2_score: 0.9374 - val_loss: 0.5706 - val_accuracy: 0.8521 - val_precision_16: 0.8592 - val_recall_16: 0.8506 - val_f2_score: 0.8523
Epoch 39/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2146 - accuracy: 0.9574 - precision_16: 0.9589 - recall_16: 0.9563 - f2_score: 0.9568 - val_loss: 0.8021 - val_accuracy: 0.8350 - val_precision_16: 0.8345 - val_recall_16: 0.8321 - val_f2_score: 0.8326
Epoch 40/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2313 - accuracy: 0.9508 - precision_16: 0.9528 - recall_16: 0.9492 - f2_score: 0.9499 - val_loss: 0.6192 - val_accuracy: 0.8578 - val_precision_16: 0.8586 - val_recall_16: 0.8549 - val_f2_score: 0.8556
Epoch 41/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1705 - accuracy: 0.9767 - precision_16: 0.9772 - recall_16: 0.9761 - f2_score: 0.9763 - val_loss: 0.4996 - val_accuracy: 0.8805 - val_precision_16: 0.8816 - val_recall_16: 0.8791 - val_f2_score: 0.8796
Epoch 42/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1576 - accuracy: 0.9812 - precision_16: 0.9821 - recall_16: 0.9805 - f2_score: 0.9808 - val_loss: 0.4856 - val_accuracy: 0.8933 - val_precision_16: 0.8930 - val_recall_16: 0.8905 - val_f2_score: 0.8910
Epoch 43/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1619 - accuracy: 0.9767 - precision_16: 0.9770 - recall_16: 0.9763 - f2_score: 0.9764 - val_loss: 0.7238 - val_accuracy: 0.8563 - val_precision_16: 0.8594 - val_recall_16: 0.8521 - val_f2_score: 0.8535
Epoch 44/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1749 - accuracy: 0.9712 - precision_16: 0.9721 - recall_16: 0.9707 - f2_score: 0.9710 - val_loss: 0.5140 - val_accuracy: 0.8791 - val_precision_16: 0.8827 - val_recall_16: 0.8777 - val_f2_score: 0.8787
Epoch 45/150
198/198 [==============================] - 2s 10ms/step - loss: 0.2403 - accuracy: 0.9467 - precision_16: 0.9477 - recall_16: 0.9459 - f2_score: 0.9462 - val_loss: 0.7699 - val_accuracy: 0.7866 - val_precision_16: 0.7894 - val_recall_16: 0.7838 - val_f2_score: 0.7849
Epoch 46/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2550 - accuracy: 0.9410 - precision_16: 0.9421 - recall_16: 0.9399 - f2_score: 0.9403 - val_loss: 0.5920 - val_accuracy: 0.8521 - val_precision_16: 0.8569 - val_recall_16: 0.8521 - val_f2_score: 0.8530
Epoch 47/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2230 - accuracy: 0.9581 - precision_16: 0.9591 - recall_16: 0.9568 - f2_score: 0.9573 - val_loss: 0.6068 - val_accuracy: 0.8620 - val_precision_16: 0.8655 - val_recall_16: 0.8606 - val_f2_score: 0.8616
Epoch 48/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1518 - accuracy: 0.9843 - precision_16: 0.9846 - recall_16: 0.9840 - f2_score: 0.9841 - val_loss: 0.5280 - val_accuracy: 0.8777 - val_precision_16: 0.8814 - val_recall_16: 0.8777 - val_f2_score: 0.8784
Epoch 49/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1410 - accuracy: 0.9877 - precision_16: 0.9878 - recall_16: 0.9873 - f2_score: 0.9874 - val_loss: 0.5128 - val_accuracy: 0.8819 - val_precision_16: 0.8818 - val_recall_16: 0.8805 - val_f2_score: 0.8808
Epoch 50/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1291 - accuracy: 0.9911 - precision_16: 0.9913 - recall_16: 0.9911 - f2_score: 0.9912 - val_loss: 0.5419 - val_accuracy: 0.8777 - val_precision_16: 0.8787 - val_recall_16: 0.8762 - val_f2_score: 0.8767
Epoch 51/150
198/198 [==============================] - 2s 9ms/step - loss: 0.1270 - accuracy: 0.9897 - precision_16: 0.9897 - recall_16: 0.9897 - f2_score: 0.9897 - val_loss: 0.5264 - val_accuracy: 0.8890 - val_precision_16: 0.8890 - val_recall_16: 0.8890 - val_f2_score: 0.8890
Epoch 52/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1206 - accuracy: 0.9911 - precision_16: 0.9915 - recall_16: 0.9908 - f2_score: 0.9909 - val_loss: 0.5368 - val_accuracy: 0.8762 - val_precision_16: 0.8771 - val_recall_16: 0.8734 - val_f2_score: 0.8741
Epoch 53/150
198/198 [==============================] - 1s 5ms/step - loss: 0.4396 - accuracy: 0.8851 - precision_16: 0.8866 - recall_16: 0.8831 - f2_score: 0.8838 - val_loss: 0.7672 - val_accuracy: 0.8563 - val_precision_16: 0.8623 - val_recall_16: 0.8549 - val_f2_score: 0.8564
Epoch 54/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1880 - accuracy: 0.9685 - precision_16: 0.9696 - recall_16: 0.9682 - f2_score: 0.9685 - val_loss: 0.5979 - val_accuracy: 0.8862 - val_precision_16: 0.8910 - val_recall_16: 0.8834 - val_f2_score: 0.8849
Epoch 55/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1376 - accuracy: 0.9894 - precision_16: 0.9894 - recall_16: 0.9886 - f2_score: 0.9888 - val_loss: 0.5508 - val_accuracy: 0.8819 - val_precision_16: 0.8856 - val_recall_16: 0.8805 - val_f2_score: 0.8815
Epoch 56/150
198/198 [==============================] - 2s 10ms/step - loss: 0.1592 - accuracy: 0.9777 - precision_16: 0.9782 - recall_16: 0.9775 - f2_score: 0.9777 - val_loss: 0.6695 - val_accuracy: 0.8549 - val_precision_16: 0.8584 - val_recall_16: 0.8535 - val_f2_score: 0.8545
Epoch 57/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1690 - accuracy: 0.9741 - precision_16: 0.9746 - recall_16: 0.9733 - f2_score: 0.9735 - val_loss: 0.7579 - val_accuracy: 0.8108 - val_precision_16: 0.8152 - val_recall_16: 0.8094 - val_f2_score: 0.8105
Epoch 58/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2280 - accuracy: 0.9521 - precision_16: 0.9540 - recall_16: 0.9508 - f2_score: 0.9514 - val_loss: 0.5955 - val_accuracy: 0.8734 - val_precision_16: 0.8743 - val_recall_16: 0.8706 - val_f2_score: 0.8713
Epoch 59/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1815 - accuracy: 0.9680 - precision_16: 0.9685 - recall_16: 0.9672 - f2_score: 0.9675 - val_loss: 0.6377 - val_accuracy: 0.8421 - val_precision_16: 0.8499 - val_recall_16: 0.8378 - val_f2_score: 0.8402
Epoch 60/150
198/198 [==============================] - 1s 7ms/step - loss: 0.1644 - accuracy: 0.9756 - precision_16: 0.9758 - recall_16: 0.9752 - f2_score: 0.9753 - val_loss: 0.5421 - val_accuracy: 0.8805 - val_precision_16: 0.8841 - val_recall_16: 0.8791 - val_f2_score: 0.8801
Epoch 61/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1343 - accuracy: 0.9872 - precision_16: 0.9873 - recall_16: 0.9870 - f2_score: 0.9871 - val_loss: 0.5441 - val_accuracy: 0.8848 - val_precision_16: 0.8884 - val_recall_16: 0.8834 - val_f2_score: 0.8844
Epoch 62/150
198/198 [==============================] - 2s 8ms/step - loss: 0.1236 - accuracy: 0.9913 - precision_16: 0.9916 - recall_16: 0.9913 - f2_score: 0.9914 - val_loss: 0.6374 - val_accuracy: 0.8492 - val_precision_16: 0.8502 - val_recall_16: 0.8478 - val_f2_score: 0.8483
Epoch 63/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1467 - accuracy: 0.9790 - precision_16: 0.9792 - recall_16: 0.9782 - f2_score: 0.9784 - val_loss: 0.6315 - val_accuracy: 0.8663 - val_precision_16: 0.8711 - val_recall_16: 0.8649 - val_f2_score: 0.8661
Epoch 64/150
198/198 [==============================] - 1s 5ms/step - loss: 0.5893 - accuracy: 0.8329 - precision_16: 0.8458 - recall_16: 0.8204 - f2_score: 0.8254 - val_loss: 1.0592 - val_accuracy: 0.6671 - val_precision_16: 0.7039 - val_recall_16: 0.6358 - val_f2_score: 0.6484
Epoch 65/150
198/198 [==============================] - 1s 5ms/step - loss: 0.4263 - accuracy: 0.8834 - precision_16: 0.8908 - recall_16: 0.8741 - f2_score: 0.8774 - val_loss: 0.6070 - val_accuracy: 0.8478 - val_precision_16: 0.8520 - val_recall_16: 0.8350 - val_f2_score: 0.8383
Epoch 66/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3986 - accuracy: 0.8959 - precision_16: 0.9008 - recall_16: 0.8892 - f2_score: 0.8915 - val_loss: 0.6210 - val_accuracy: 0.8307 - val_precision_16: 0.8341 - val_recall_16: 0.8222 - val_f2_score: 0.8245
Epoch 67/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2618 - accuracy: 0.9475 - precision_16: 0.9499 - recall_16: 0.9457 - f2_score: 0.9466 - val_loss: 0.6052 - val_accuracy: 0.8407 - val_precision_16: 0.8464 - val_recall_16: 0.8307 - val_f2_score: 0.8338
Epoch 68/150
198/198 [==============================] - 2s 8ms/step - loss: 0.2497 - accuracy: 0.9513 - precision_16: 0.9544 - recall_16: 0.9475 - f2_score: 0.9488 - val_loss: 0.5814 - val_accuracy: 0.8535 - val_precision_16: 0.8602 - val_recall_16: 0.8492 - val_f2_score: 0.8514
Epoch 69/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2584 - accuracy: 0.9459 - precision_16: 0.9485 - recall_16: 0.9435 - f2_score: 0.9445 - val_loss: 0.6199 - val_accuracy: 0.8393 - val_precision_16: 0.8463 - val_recall_16: 0.8378 - val_f2_score: 0.8395
Epoch 70/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2216 - accuracy: 0.9633 - precision_16: 0.9647 - recall_16: 0.9603 - f2_score: 0.9612 - val_loss: 0.6257 - val_accuracy: 0.8350 - val_precision_16: 0.8357 - val_recall_16: 0.8321 - val_f2_score: 0.8329
Epoch 71/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2142 - accuracy: 0.9625 - precision_16: 0.9643 - recall_16: 0.9616 - f2_score: 0.9621 - val_loss: 0.6570 - val_accuracy: 0.8563 - val_precision_16: 0.8633 - val_recall_16: 0.8535 - val_f2_score: 0.8554
Epoch 72/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2180 - accuracy: 0.9598 - precision_16: 0.9614 - recall_16: 0.9568 - f2_score: 0.9577 - val_loss: 0.9147 - val_accuracy: 0.7824 - val_precision_16: 0.7867 - val_recall_16: 0.7767 - val_f2_score: 0.7787
Epoch 73/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2547 - accuracy: 0.9432 - precision_16: 0.9455 - recall_16: 0.9419 - f2_score: 0.9426 - val_loss: 0.6527 - val_accuracy: 0.8350 - val_precision_16: 0.8386 - val_recall_16: 0.8279 - val_f2_score: 0.8300
Epoch 74/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1779 - accuracy: 0.9796 - precision_16: 0.9804 - recall_16: 0.9790 - f2_score: 0.9792 - val_loss: 0.6599 - val_accuracy: 0.8478 - val_precision_16: 0.8502 - val_recall_16: 0.8478 - val_f2_score: 0.8483
Epoch 75/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2305 - accuracy: 0.9557 - precision_16: 0.9576 - recall_16: 0.9543 - f2_score: 0.9549 - val_loss: 0.6000 - val_accuracy: 0.8506 - val_precision_16: 0.8514 - val_recall_16: 0.8478 - val_f2_score: 0.8485
Epoch 76/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1732 - accuracy: 0.9796 - precision_16: 0.9802 - recall_16: 0.9785 - f2_score: 0.9788 - val_loss: 0.6323 - val_accuracy: 0.8606 - val_precision_16: 0.8629 - val_recall_16: 0.8592 - val_f2_score: 0.8599
Epoch 77/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3302 - accuracy: 0.9209 - precision_16: 0.9234 - recall_16: 0.9180 - f2_score: 0.9191 - val_loss: 0.9954 - val_accuracy: 0.7895 - val_precision_16: 0.7977 - val_recall_16: 0.7852 - val_f2_score: 0.7877
Epoch 78/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2255 - accuracy: 0.9578 - precision_16: 0.9581 - recall_16: 0.9563 - f2_score: 0.9567 - val_loss: 0.6100 - val_accuracy: 0.8663 - val_precision_16: 0.8661 - val_recall_16: 0.8649 - val_f2_score: 0.8651
Epoch 79/150
198/198 [==============================] - 2s 9ms/step - loss: 0.1657 - accuracy: 0.9824 - precision_16: 0.9827 - recall_16: 0.9823 - f2_score: 0.9824 - val_loss: 0.6789 - val_accuracy: 0.8236 - val_precision_16: 0.8257 - val_recall_16: 0.8222 - val_f2_score: 0.8229
Epoch 80/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1615 - accuracy: 0.9828 - precision_16: 0.9834 - recall_16: 0.9815 - f2_score: 0.9819 - val_loss: 0.6180 - val_accuracy: 0.8677 - val_precision_16: 0.8686 - val_recall_16: 0.8649 - val_f2_score: 0.8656
Epoch 81/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1630 - accuracy: 0.9810 - precision_16: 0.9812 - recall_16: 0.9801 - f2_score: 0.9803 - val_loss: 0.8624 - val_accuracy: 0.8037 - val_precision_16: 0.8063 - val_recall_16: 0.7994 - val_f2_score: 0.8008
Epoch 82/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2100 - accuracy: 0.9603 - precision_16: 0.9611 - recall_16: 0.9584 - f2_score: 0.9589 - val_loss: 0.6904 - val_accuracy: 0.8464 - val_precision_16: 0.8510 - val_recall_16: 0.8450 - val_f2_score: 0.8462
Epoch 83/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1958 - accuracy: 0.9663 - precision_16: 0.9672 - recall_16: 0.9649 - f2_score: 0.9653 - val_loss: 0.6507 - val_accuracy: 0.8478 - val_precision_16: 0.8500 - val_recall_16: 0.8464 - val_f2_score: 0.8471
Epoch 84/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1849 - accuracy: 0.9720 - precision_16: 0.9726 - recall_16: 0.9709 - f2_score: 0.9712 - val_loss: 0.7321 - val_accuracy: 0.8450 - val_precision_16: 0.8462 - val_recall_16: 0.8450 - val_f2_score: 0.8452
Epoch 85/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2152 - accuracy: 0.9589 - precision_16: 0.9603 - recall_16: 0.9574 - f2_score: 0.9580 - val_loss: 0.8111 - val_accuracy: 0.8336 - val_precision_16: 0.8401 - val_recall_16: 0.8293 - val_f2_score: 0.8314
Epoch 86/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1537 - accuracy: 0.9834 - precision_16: 0.9838 - recall_16: 0.9831 - f2_score: 0.9832 - val_loss: 0.6572 - val_accuracy: 0.8748 - val_precision_16: 0.8784 - val_recall_16: 0.8734 - val_f2_score: 0.8744
Epoch 87/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1401 - accuracy: 0.9875 - precision_16: 0.9878 - recall_16: 0.9872 - f2_score: 0.9873 - val_loss: 0.6731 - val_accuracy: 0.8620 - val_precision_16: 0.8643 - val_recall_16: 0.8606 - val_f2_score: 0.8613
Epoch 88/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1339 - accuracy: 0.9896 - precision_16: 0.9897 - recall_16: 0.9892 - f2_score: 0.9893 - val_loss: 0.6134 - val_accuracy: 0.8691 - val_precision_16: 0.8714 - val_recall_16: 0.8677 - val_f2_score: 0.8685
Epoch 89/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1337 - accuracy: 0.9884 - precision_16: 0.9888 - recall_16: 0.9881 - f2_score: 0.9883 - val_loss: 0.8877 - val_accuracy: 0.8051 - val_precision_16: 0.8057 - val_recall_16: 0.8023 - val_f2_score: 0.8030
Epoch 90/150
198/198 [==============================] - 2s 8ms/step - loss: 0.3584 - accuracy: 0.9163 - precision_16: 0.9192 - recall_16: 0.9149 - f2_score: 0.9157 - val_loss: 0.6509 - val_accuracy: 0.8492 - val_precision_16: 0.8520 - val_recall_16: 0.8435 - val_f2_score: 0.8452
Epoch 91/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2340 - accuracy: 0.9516 - precision_16: 0.9533 - recall_16: 0.9502 - f2_score: 0.9508 - val_loss: 0.6795 - val_accuracy: 0.8521 - val_precision_16: 0.8541 - val_recall_16: 0.8492 - val_f2_score: 0.8502
Epoch 92/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1493 - accuracy: 0.9843 - precision_16: 0.9848 - recall_16: 0.9840 - f2_score: 0.9842 - val_loss: 0.7115 - val_accuracy: 0.8535 - val_precision_16: 0.8531 - val_recall_16: 0.8506 - val_f2_score: 0.8511
Epoch 93/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1278 - accuracy: 0.9922 - precision_16: 0.9926 - recall_16: 0.9922 - f2_score: 0.9923 - val_loss: 0.6257 - val_accuracy: 0.8649 - val_precision_16: 0.8655 - val_recall_16: 0.8606 - val_f2_score: 0.8616
Epoch 94/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1199 - accuracy: 0.9940 - precision_16: 0.9940 - recall_16: 0.9938 - f2_score: 0.9939 - val_loss: 0.6250 - val_accuracy: 0.8634 - val_precision_16: 0.8645 - val_recall_16: 0.8620 - val_f2_score: 0.8625
Epoch 95/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1096 - accuracy: 0.9959 - precision_16: 0.9959 - recall_16: 0.9959 - f2_score: 0.9959 - val_loss: 0.6181 - val_accuracy: 0.8691 - val_precision_16: 0.8741 - val_recall_16: 0.8691 - val_f2_score: 0.8701
Epoch 96/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2948 - accuracy: 0.9388 - precision_16: 0.9415 - recall_16: 0.9372 - f2_score: 0.9380 - val_loss: 1.7677 - val_accuracy: 0.6856 - val_precision_16: 0.6863 - val_recall_16: 0.6785 - val_f2_score: 0.6801
Epoch 97/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2710 - accuracy: 0.9388 - precision_16: 0.9406 - recall_16: 0.9364 - f2_score: 0.9372 - val_loss: 0.6701 - val_accuracy: 0.8563 - val_precision_16: 0.8559 - val_recall_16: 0.8535 - val_f2_score: 0.8540
Epoch 98/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1596 - accuracy: 0.9783 - precision_16: 0.9796 - recall_16: 0.9782 - f2_score: 0.9784 - val_loss: 0.6562 - val_accuracy: 0.8606 - val_precision_16: 0.8616 - val_recall_16: 0.8592 - val_f2_score: 0.8597
Epoch 99/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1292 - accuracy: 0.9910 - precision_16: 0.9911 - recall_16: 0.9910 - f2_score: 0.9910 - val_loss: 0.6626 - val_accuracy: 0.8649 - val_precision_16: 0.8670 - val_recall_16: 0.8620 - val_f2_score: 0.8630
Epoch 100/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1347 - accuracy: 0.9859 - precision_16: 0.9862 - recall_16: 0.9859 - f2_score: 0.9860 - val_loss: 1.0611 - val_accuracy: 0.8122 - val_precision_16: 0.8120 - val_recall_16: 0.8108 - val_f2_score: 0.8110
Epoch 101/150
198/198 [==============================] - 2s 9ms/step - loss: 0.1948 - accuracy: 0.9641 - precision_16: 0.9653 - recall_16: 0.9633 - f2_score: 0.9637 - val_loss: 1.7693 - val_accuracy: 0.6970 - val_precision_16: 0.7007 - val_recall_16: 0.6927 - val_f2_score: 0.6943
Epoch 102/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2248 - accuracy: 0.9540 - precision_16: 0.9555 - recall_16: 0.9536 - f2_score: 0.9540 - val_loss: 0.6356 - val_accuracy: 0.8620 - val_precision_16: 0.8643 - val_recall_16: 0.8606 - val_f2_score: 0.8613
Epoch 103/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1242 - accuracy: 0.9915 - precision_16: 0.9916 - recall_16: 0.9913 - f2_score: 0.9914 - val_loss: 0.6286 - val_accuracy: 0.8649 - val_precision_16: 0.8698 - val_recall_16: 0.8649 - val_f2_score: 0.8659
Epoch 104/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1205 - accuracy: 0.9921 - precision_16: 0.9922 - recall_16: 0.9918 - f2_score: 0.9919 - val_loss: 0.6397 - val_accuracy: 0.8734 - val_precision_16: 0.8745 - val_recall_16: 0.8720 - val_f2_score: 0.8725
Epoch 105/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1063 - accuracy: 0.9957 - precision_16: 0.9957 - recall_16: 0.9957 - f2_score: 0.9957 - val_loss: 0.6452 - val_accuracy: 0.8464 - val_precision_16: 0.8477 - val_recall_16: 0.8393 - val_f2_score: 0.8409
Epoch 106/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1080 - accuracy: 0.9948 - precision_16: 0.9949 - recall_16: 0.9946 - f2_score: 0.9947 - val_loss: 0.7240 - val_accuracy: 0.8492 - val_precision_16: 0.8524 - val_recall_16: 0.8464 - val_f2_score: 0.8476
Epoch 107/150
198/198 [==============================] - 2s 9ms/step - loss: 0.1583 - accuracy: 0.9755 - precision_16: 0.9765 - recall_16: 0.9748 - f2_score: 0.9752 - val_loss: 0.8856 - val_accuracy: 0.8094 - val_precision_16: 0.8105 - val_recall_16: 0.8094 - val_f2_score: 0.8096
Epoch 108/150
198/198 [==============================] - 1s 6ms/step - loss: 0.3005 - accuracy: 0.9258 - precision_16: 0.9273 - recall_16: 0.9247 - f2_score: 0.9252 - val_loss: 0.6815 - val_accuracy: 0.8535 - val_precision_16: 0.8539 - val_recall_16: 0.8478 - val_f2_score: 0.8490
Epoch 109/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1462 - accuracy: 0.9831 - precision_16: 0.9840 - recall_16: 0.9828 - f2_score: 0.9830 - val_loss: 0.6957 - val_accuracy: 0.8378 - val_precision_16: 0.8388 - val_recall_16: 0.8364 - val_f2_score: 0.8369
Epoch 110/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1222 - accuracy: 0.9910 - precision_16: 0.9911 - recall_16: 0.9908 - f2_score: 0.9909 - val_loss: 0.6128 - val_accuracy: 0.8805 - val_precision_16: 0.8814 - val_recall_16: 0.8777 - val_f2_score: 0.8784
Epoch 111/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1186 - accuracy: 0.9911 - precision_16: 0.9911 - recall_16: 0.9911 - f2_score: 0.9911 - val_loss: 0.6561 - val_accuracy: 0.8478 - val_precision_16: 0.8478 - val_recall_16: 0.8478 - val_f2_score: 0.8478
Epoch 112/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1128 - accuracy: 0.9924 - precision_16: 0.9927 - recall_16: 0.9922 - f2_score: 0.9923 - val_loss: 1.2059 - val_accuracy: 0.7724 - val_precision_16: 0.7803 - val_recall_16: 0.7681 - val_f2_score: 0.7705
Epoch 113/150
198/198 [==============================] - 2s 8ms/step - loss: 0.2606 - accuracy: 0.9413 - precision_16: 0.9421 - recall_16: 0.9399 - f2_score: 0.9403 - val_loss: 1.5140 - val_accuracy: 0.7127 - val_precision_16: 0.7157 - val_recall_16: 0.7127 - val_f2_score: 0.7133
Epoch 114/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2013 - accuracy: 0.9647 - precision_16: 0.9661 - recall_16: 0.9639 - f2_score: 0.9644 - val_loss: 0.8788 - val_accuracy: 0.8478 - val_precision_16: 0.8484 - val_recall_16: 0.8435 - val_f2_score: 0.8445
Epoch 115/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1236 - accuracy: 0.9907 - precision_16: 0.9910 - recall_16: 0.9905 - f2_score: 0.9906 - val_loss: 0.6172 - val_accuracy: 0.8691 - val_precision_16: 0.8688 - val_recall_16: 0.8663 - val_f2_score: 0.8668
Epoch 116/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1078 - accuracy: 0.9954 - precision_16: 0.9957 - recall_16: 0.9954 - f2_score: 0.9955 - val_loss: 0.6553 - val_accuracy: 0.8748 - val_precision_16: 0.8773 - val_recall_16: 0.8748 - val_f2_score: 0.8753
Epoch 117/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1023 - accuracy: 0.9956 - precision_16: 0.9956 - recall_16: 0.9954 - f2_score: 0.9954 - val_loss: 0.6330 - val_accuracy: 0.8691 - val_precision_16: 0.8702 - val_recall_16: 0.8677 - val_f2_score: 0.8682
Epoch 118/150
198/198 [==============================] - 2s 11ms/step - loss: 0.1181 - accuracy: 0.9891 - precision_16: 0.9892 - recall_16: 0.9891 - f2_score: 0.9891 - val_loss: 0.6941 - val_accuracy: 0.8634 - val_precision_16: 0.8631 - val_recall_16: 0.8606 - val_f2_score: 0.8611
Epoch 119/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1625 - accuracy: 0.9728 - precision_16: 0.9739 - recall_16: 0.9723 - f2_score: 0.9726 - val_loss: 0.8588 - val_accuracy: 0.8492 - val_precision_16: 0.8508 - val_recall_16: 0.8435 - val_f2_score: 0.8450
Epoch 120/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2615 - accuracy: 0.9405 - precision_16: 0.9421 - recall_16: 0.9392 - f2_score: 0.9398 - val_loss: 0.6947 - val_accuracy: 0.8563 - val_precision_16: 0.8575 - val_recall_16: 0.8563 - val_f2_score: 0.8566
Epoch 121/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1334 - accuracy: 0.9850 - precision_16: 0.9851 - recall_16: 0.9850 - f2_score: 0.9850 - val_loss: 0.7359 - val_accuracy: 0.8634 - val_precision_16: 0.8641 - val_recall_16: 0.8592 - val_f2_score: 0.8602
Epoch 122/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1510 - accuracy: 0.9782 - precision_16: 0.9789 - recall_16: 0.9778 - f2_score: 0.9781 - val_loss: 0.8408 - val_accuracy: 0.8407 - val_precision_16: 0.8431 - val_recall_16: 0.8407 - val_f2_score: 0.8412
Epoch 123/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1927 - accuracy: 0.9652 - precision_16: 0.9667 - recall_16: 0.9644 - f2_score: 0.9649 - val_loss: 0.8234 - val_accuracy: 0.8606 - val_precision_16: 0.8629 - val_recall_16: 0.8592 - val_f2_score: 0.8599
Epoch 124/150
198/198 [==============================] - 2s 8ms/step - loss: 0.1165 - accuracy: 0.9929 - precision_16: 0.9930 - recall_16: 0.9926 - f2_score: 0.9927 - val_loss: 0.6311 - val_accuracy: 0.8748 - val_precision_16: 0.8784 - val_recall_16: 0.8734 - val_f2_score: 0.8744
Epoch 125/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1134 - accuracy: 0.9919 - precision_16: 0.9919 - recall_16: 0.9916 - f2_score: 0.9917 - val_loss: 0.6774 - val_accuracy: 0.8521 - val_precision_16: 0.8543 - val_recall_16: 0.8506 - val_f2_score: 0.8514
Epoch 126/150
198/198 [==============================] - 1s 5ms/step - loss: 0.2206 - accuracy: 0.9528 - precision_16: 0.9537 - recall_16: 0.9519 - f2_score: 0.9523 - val_loss: 0.9410 - val_accuracy: 0.8450 - val_precision_16: 0.8474 - val_recall_16: 0.8450 - val_f2_score: 0.8454
Epoch 127/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1254 - accuracy: 0.9886 - precision_16: 0.9886 - recall_16: 0.9884 - f2_score: 0.9885 - val_loss: 0.6882 - val_accuracy: 0.8663 - val_precision_16: 0.8663 - val_recall_16: 0.8663 - val_f2_score: 0.8663
Epoch 128/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1112 - accuracy: 0.9918 - precision_16: 0.9919 - recall_16: 0.9916 - f2_score: 0.9917 - val_loss: 0.7477 - val_accuracy: 0.8634 - val_precision_16: 0.8634 - val_recall_16: 0.8634 - val_f2_score: 0.8634
Epoch 129/150
198/198 [==============================] - 2s 9ms/step - loss: 0.2127 - accuracy: 0.9603 - precision_16: 0.9616 - recall_16: 0.9597 - f2_score: 0.9600 - val_loss: 0.9470 - val_accuracy: 0.7838 - val_precision_16: 0.7892 - val_recall_16: 0.7724 - val_f2_score: 0.7757
Epoch 130/150
198/198 [==============================] - 1s 6ms/step - loss: 0.2969 - accuracy: 0.9328 - precision_16: 0.9346 - recall_16: 0.9318 - f2_score: 0.9324 - val_loss: 0.6555 - val_accuracy: 0.8592 - val_precision_16: 0.8590 - val_recall_16: 0.8578 - val_f2_score: 0.8580
Epoch 131/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1336 - accuracy: 0.9869 - precision_16: 0.9869 - recall_16: 0.9866 - f2_score: 0.9866 - val_loss: 0.6021 - val_accuracy: 0.8634 - val_precision_16: 0.8645 - val_recall_16: 0.8620 - val_f2_score: 0.8625
Epoch 132/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1077 - accuracy: 0.9959 - precision_16: 0.9959 - recall_16: 0.9959 - f2_score: 0.9959 - val_loss: 0.6691 - val_accuracy: 0.8734 - val_precision_16: 0.8743 - val_recall_16: 0.8706 - val_f2_score: 0.8713
Epoch 133/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1090 - accuracy: 0.9938 - precision_16: 0.9938 - recall_16: 0.9937 - f2_score: 0.9937 - val_loss: 0.6593 - val_accuracy: 0.8819 - val_precision_16: 0.8827 - val_recall_16: 0.8777 - val_f2_score: 0.8787
Epoch 134/150
198/198 [==============================] - 1s 7ms/step - loss: 0.1067 - accuracy: 0.9937 - precision_16: 0.9940 - recall_16: 0.9935 - f2_score: 0.9936 - val_loss: 0.6550 - val_accuracy: 0.8706 - val_precision_16: 0.8743 - val_recall_16: 0.8706 - val_f2_score: 0.8713
Epoch 135/150
198/198 [==============================] - 2s 8ms/step - loss: 0.0968 - accuracy: 0.9972 - precision_16: 0.9973 - recall_16: 0.9972 - f2_score: 0.9972 - val_loss: 0.6144 - val_accuracy: 0.8663 - val_precision_16: 0.8684 - val_recall_16: 0.8634 - val_f2_score: 0.8644
Epoch 136/150
198/198 [==============================] - 1s 5ms/step - loss: 0.0922 - accuracy: 0.9968 - precision_16: 0.9972 - recall_16: 0.9968 - f2_score: 0.9969 - val_loss: 0.7016 - val_accuracy: 0.8748 - val_precision_16: 0.8748 - val_recall_16: 0.8748 - val_f2_score: 0.8748
Epoch 137/150
198/198 [==============================] - 1s 5ms/step - loss: 0.0905 - accuracy: 0.9968 - precision_16: 0.9970 - recall_16: 0.9967 - f2_score: 0.9967 - val_loss: 0.7547 - val_accuracy: 0.8649 - val_precision_16: 0.8670 - val_recall_16: 0.8620 - val_f2_score: 0.8630
Epoch 138/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3142 - accuracy: 0.9259 - precision_16: 0.9279 - recall_16: 0.9244 - f2_score: 0.9251 - val_loss: 0.6527 - val_accuracy: 0.8634 - val_precision_16: 0.8645 - val_recall_16: 0.8620 - val_f2_score: 0.8625
Epoch 139/150
198/198 [==============================] - 1s 5ms/step - loss: 0.3038 - accuracy: 0.9261 - precision_16: 0.9279 - recall_16: 0.9242 - f2_score: 0.9249 - val_loss: 0.7629 - val_accuracy: 0.8450 - val_precision_16: 0.8474 - val_recall_16: 0.8450 - val_f2_score: 0.8454
Epoch 140/150
198/198 [==============================] - 2s 9ms/step - loss: 0.1386 - accuracy: 0.9854 - precision_16: 0.9858 - recall_16: 0.9854 - f2_score: 0.9855 - val_loss: 0.7761 - val_accuracy: 0.8578 - val_precision_16: 0.8588 - val_recall_16: 0.8563 - val_f2_score: 0.8568
Epoch 141/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1168 - accuracy: 0.9911 - precision_16: 0.9915 - recall_16: 0.9911 - f2_score: 0.9912 - val_loss: 0.6234 - val_accuracy: 0.8748 - val_precision_16: 0.8773 - val_recall_16: 0.8748 - val_f2_score: 0.8753
Epoch 142/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1027 - accuracy: 0.9957 - precision_16: 0.9957 - recall_16: 0.9957 - f2_score: 0.9957 - val_loss: 0.6044 - val_accuracy: 0.8762 - val_precision_16: 0.8762 - val_recall_16: 0.8762 - val_f2_score: 0.8762
Epoch 143/150
198/198 [==============================] - 1s 5ms/step - loss: 0.0964 - accuracy: 0.9976 - precision_16: 0.9976 - recall_16: 0.9976 - f2_score: 0.9976 - val_loss: 0.5993 - val_accuracy: 0.8791 - val_precision_16: 0.8816 - val_recall_16: 0.8791 - val_f2_score: 0.8796
Epoch 144/150
198/198 [==============================] - 1s 5ms/step - loss: 0.0935 - accuracy: 0.9972 - precision_16: 0.9973 - recall_16: 0.9972 - f2_score: 0.9972 - val_loss: 0.6662 - val_accuracy: 0.8492 - val_precision_16: 0.8504 - val_recall_16: 0.8492 - val_f2_score: 0.8495
Epoch 145/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1564 - accuracy: 0.9744 - precision_16: 0.9751 - recall_16: 0.9741 - f2_score: 0.9743 - val_loss: 1.0639 - val_accuracy: 0.7297 - val_precision_16: 0.7307 - val_recall_16: 0.7255 - val_f2_score: 0.7265
Epoch 146/150
198/198 [==============================] - 2s 9ms/step - loss: 0.3437 - accuracy: 0.9225 - precision_16: 0.9239 - recall_16: 0.9215 - f2_score: 0.9220 - val_loss: 0.7079 - val_accuracy: 0.8578 - val_precision_16: 0.8596 - val_recall_16: 0.8535 - val_f2_score: 0.8547
Epoch 147/150
198/198 [==============================] - 1s 6ms/step - loss: 0.1594 - accuracy: 0.9744 - precision_16: 0.9751 - recall_16: 0.9736 - f2_score: 0.9739 - val_loss: 0.8247 - val_accuracy: 0.8492 - val_precision_16: 0.8516 - val_recall_16: 0.8492 - val_f2_score: 0.8497
Epoch 148/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1586 - accuracy: 0.9718 - precision_16: 0.9727 - recall_16: 0.9712 - f2_score: 0.9715 - val_loss: 0.6767 - val_accuracy: 0.8393 - val_precision_16: 0.8390 - val_recall_16: 0.8378 - val_f2_score: 0.8381
Epoch 149/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1275 - accuracy: 0.9875 - precision_16: 0.9875 - recall_16: 0.9872 - f2_score: 0.9872 - val_loss: 0.7541 - val_accuracy: 0.8179 - val_precision_16: 0.8207 - val_recall_16: 0.8137 - val_f2_score: 0.8150
Epoch 150/150
198/198 [==============================] - 1s 5ms/step - loss: 0.1536 - accuracy: 0.9763 - precision_16: 0.9770 - recall_16: 0.9753 - f2_score: 0.9757 - val_loss: 0.7656 - val_accuracy: 0.8535 - val_precision_16: 0.8571 - val_recall_16: 0.8535 - val_f2_score: 0.8542
In [ ]:
# Summary
mlp.summary()
Model: "sequential_7"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 flatten_7 (Flatten)         (None, 65536)             0         
                                                                 
 dense_9 (Dense)             (None, 32)                2097184   
                                                                 
 dense_10 (Dense)            (None, 16)                528       
                                                                 
 dense_11 (Dense)            (None, 4)                 68        
                                                                 
=================================================================
Total params: 2097780 (8.00 MB)
Trainable params: 2097780 (8.00 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
In [ ]:
# Plot Training F2 score
plt.figure(figsize = (10, 4))
plt.subplot(1, 2, 1)
plt.plot(mlp_history.history['f2_score'], label = 'training')

# Plot Val F2 score
plt.ylabel('Accuracy %')
plt.title('Training')
plt.plot(mlp_history.history['val_f2_score'], label = 'validation')
plt.title('F2 Score')
plt.legend()

# Plot Training Loss
plt.subplot(1, 2, 2)
plt.plot(mlp_history.history['loss'], label = 'training')
plt.ylabel('Training Loss')
plt.xlabel('epochs')

# Plot Validation Loss
plt.plot(mlp_history.history['val_loss'], label = 'validation')
plt.xlabel('epochs')
plt.title('Loss')
plt.legend()
plt.show()

# After training, predict classes on the test set
y_pred = mlp.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test_encoded, axis=1)

# Generate the confusion matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)

# Plotting the confusion matrix
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(cm, cmap=plt.cm.Blues, alpha=0.3)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(x=j, y=i, s=cm[i, j], va='center', ha='center')

plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix')
plt.show()

# Print the scores
print(f"Validation Scores:\n\tF2 Score: {mlp_history.history['val_f2_score']}\n\tRecall: {mlp_history.history['val_recall_16']}")
print(f"\tPrecision: {mlp_history.history['val_precision_16']}\n\tAccuracy: {mlp_history.history['val_accuracy']}")
No description has been provided for this image
 1/22 [>.............................] - ETA: 0s22/22 [==============================] - 0s 3ms/step
No description has been provided for this image
Validation Scores:
	F2 Score: [0.28009337186813354, 0.2758274972438812, 0.31012868881225586, 0.4770932197570801, 0.5999397039413452, 0.6638755798339844, 0.6994742751121521, 0.7521739602088928, 0.7395591139793396, 0.7806638479232788, 0.8031609058380127, 0.7710011601448059, 0.8216029405593872, 0.8072393536567688, 0.8192737102508545, 0.84670490026474, 0.8445078134536743, 0.812624990940094, 0.8528236746788025, 0.6471428275108337, 0.8466362953186035, 0.8404679894447327, 0.8299943208694458, 0.844266951084137, 0.8566134572029114, 0.7855717539787292, 0.8430812954902649, 0.863493800163269, 0.8276352882385254, 0.8345221877098083, 0.8862019777297974, 0.8646724224090576, 0.8587297201156616, 0.8795900940895081, 0.7900741696357727, 0.863493800163269, 0.8675214052200317, 0.8523375391960144, 0.832621693611145, 0.855637788772583, 0.8795900940895081, 0.8909763693809509, 0.8535194993019104, 0.8786670565605164, 0.7849002480506897, 0.8530332446098328, 0.8615779280662537, 0.878416895866394, 0.8807627558708191, 0.8767435550689697, 0.8890469670295715, 0.8741457462310791, 0.8563692569732666, 0.8848674297332764, 0.8815152645111084, 0.8544573783874512, 0.8105413913726807, 0.8712983131408691, 0.8402281403541565, 0.88009113073349, 0.8843634128570557, 0.8482779264450073, 0.8660968542098999, 0.6483898758888245, 0.8383318781852722, 0.8245363831520081, 0.8338092565536499, 0.8513975739479065, 0.8395097255706787, 0.8328588008880615, 0.8554319739341736, 0.7786651849746704, 0.8300057649612427, 0.8482779264450073, 0.8485194444656372, 0.8599088788032532, 0.7876712679862976, 0.8651110529899597, 0.822892963886261, 0.8656036257743835, 0.8007979393005371, 0.8461538553237915, 0.8470956683158875, 0.8451906442642212, 0.8314318060874939, 0.8743948340415955, 0.8613324761390686, 0.8684510588645935, 0.8029612898826599, 0.8452109694480896, 0.8501850962638855, 0.8511243462562561, 0.8615779280662537, 0.8625105619430542, 0.8701224327087402, 0.6800683736801147, 0.853971004486084, 0.8596640229225159, 0.8630020022392273, 0.8110415935516357, 0.6943256258964539, 0.8613324761390686, 0.8658502101898193, 0.8724737763404846, 0.8409351110458374, 0.847578227519989, 0.8096187114715576, 0.8490028977394104, 0.8368915915489197, 0.878416895866394, 0.8477951884269714, 0.7705479860305786, 0.7132687568664551, 0.8444887399673462, 0.8667805790901184, 0.875320315361023, 0.86820387840271, 0.8610873222351074, 0.8449701070785522, 0.856573760509491, 0.8601537942886353, 0.8411614298820496, 0.8599088788032532, 0.8743948340415955, 0.8513667583465576, 0.8454312086105347, 0.866287350654602, 0.8634423613548279, 0.7757142782211304, 0.8579966425895691, 0.8625105619430542, 0.8712983131408691, 0.8786670565605164, 0.8712983131408691, 0.8644260764122009, 0.874822199344635, 0.8630020022392273, 0.8625105619430542, 0.8454312086105347, 0.8568174839019775, 0.875320315361023, 0.8762446641921997, 0.8795900940895081, 0.8494593501091003, 0.7264957427978516, 0.8547008633613586, 0.8497011065483093, 0.8380762338638306, 0.8150469660758972, 0.8542141318321228]
	Recall: [0.23897582292556763, 0.23470839858055115, 0.2674253284931183, 0.4295874834060669, 0.566145122051239, 0.6315789222717285, 0.6813655495643616, 0.7382645606994629, 0.7254623174667358, 0.7695590257644653, 0.7951635718345642, 0.7624466419219971, 0.8136557340621948, 0.7994310259819031, 0.8150782585144043, 0.8406828045845032, 0.8421052694320679, 0.8093883395195007, 0.8506401181221008, 0.6443812251091003, 0.8449501991271973, 0.837837815284729, 0.825035572052002, 0.8421052694320679, 0.8549075126647949, 0.7837837934494019, 0.8406828045845032, 0.8620198965072632, 0.8264580368995667, 0.8321479558944702, 0.8862019777297974, 0.8634423613548279, 0.8577525019645691, 0.8790895938873291, 0.788051187992096, 0.8620198965072632, 0.866287350654602, 0.8506401181221008, 0.8321479558944702, 0.8549075126647949, 0.8790895938873291, 0.8904694318771362, 0.8520625829696655, 0.8776671290397644, 0.7837837934494019, 0.8520625829696655, 0.8605974316596985, 0.8776671290397644, 0.8805121183395386, 0.8762446641921997, 0.8890469670295715, 0.8733997344970703, 0.8549075126647949, 0.883357048034668, 0.8805121183395386, 0.8534850478172302, 0.8093883395195007, 0.8705547451972961, 0.837837815284729, 0.8790895938873291, 0.883357048034668, 0.8477951884269714, 0.8648648858070374, 0.6358463764190674, 0.8349928855895996, 0.8221905827522278, 0.8307254910469055, 0.8492176532745361, 0.837837815284729, 0.8321479558944702, 0.8534850478172302, 0.7766714096069336, 0.8278805017471313, 0.8477951884269714, 0.8477951884269714, 0.8591749668121338, 0.7852062582969666, 0.8648648858070374, 0.8221905827522278, 0.8648648858070374, 0.7994310259819031, 0.8449501991271973, 0.846372663974762, 0.8449501991271973, 0.829302966594696, 0.8733997344970703, 0.8605974316596985, 0.8677098155021667, 0.8022759556770325, 0.8435277342796326, 0.8492176532745361, 0.8506401181221008, 0.8605974316596985, 0.8620198965072632, 0.8691322803497314, 0.6785206198692322, 0.8534850478172302, 0.8591749668121338, 0.8620198965072632, 0.8108108043670654, 0.6927453875541687, 0.8605974316596985, 0.8648648858070374, 0.8719772696495056, 0.8392603397369385, 0.846372663974762, 0.8093883395195007, 0.8477951884269714, 0.8364153504371643, 0.8776671290397644, 0.8477951884269714, 0.7681365609169006, 0.712660014629364, 0.8435277342796326, 0.866287350654602, 0.874822199344635, 0.8677098155021667, 0.8605974316596985, 0.8435277342796326, 0.8563300371170044, 0.8591749668121338, 0.8406828045845032, 0.8591749668121338, 0.8733997344970703, 0.8506401181221008, 0.8449501991271973, 0.866287350654602, 0.8634423613548279, 0.7724039554595947, 0.8577525019645691, 0.8620198965072632, 0.8705547451972961, 0.8776671290397644, 0.8705547451972961, 0.8634423613548279, 0.874822199344635, 0.8620198965072632, 0.8620198965072632, 0.8449501991271973, 0.8563300371170044, 0.874822199344635, 0.8762446641921997, 0.8790895938873291, 0.8492176532745361, 0.7254623174667358, 0.8534850478172302, 0.8492176532745361, 0.837837815284729, 0.8136557340621948, 0.8534850478172302]
	Precision: [0.8983957171440125, 0.9217877388000488, 0.8584474921226501, 0.8555240631103516, 0.788118839263916, 0.8345864415168762, 0.7826797366142273, 0.8134796023368835, 0.801886796951294, 0.8284839391708374, 0.8368263244628906, 0.8072289228439331, 0.8550074696540833, 0.8400598168373108, 0.8364963531494141, 0.8716813921928406, 0.8542568683624268, 0.8258345723152161, 0.8616714477539062, 0.6584302186965942, 0.8534482717514038, 0.8511560559272766, 0.8504399061203003, 0.8530259132385254, 0.8635057210922241, 0.7928057312965393, 0.8528138399124146, 0.8694404363632202, 0.8323782086372375, 0.8441558480262756, 0.8862019777297974, 0.8696275353431702, 0.8626609444618225, 0.8815976977348328, 0.7982708811759949, 0.8694404363632202, 0.8724928498268127, 0.8591954112052917, 0.8345221281051636, 0.8585714101791382, 0.8815976977348328, 0.8930099606513977, 0.8593974113464355, 0.8826895356178284, 0.7893982529640198, 0.8569384813308716, 0.8655221462249756, 0.881428599357605, 0.8817663788795471, 0.8787446618080139, 0.8890469670295715, 0.8771428465843201, 0.8622668385505676, 0.8909612894058228, 0.8855507969856262, 0.8583691120147705, 0.8151862621307373, 0.8742856979370117, 0.8499278426170349, 0.8841201663017273, 0.8884119987487793, 0.8502140045166016, 0.8710601925849915, 0.7039369940757751, 0.8519593477249146, 0.8340548276901245, 0.8463768362998962, 0.860230565071106, 0.8462643623352051, 0.8357142806053162, 0.8633093237876892, 0.7867435216903687, 0.8386167287826538, 0.8502140045166016, 0.8514285683631897, 0.8628571629524231, 0.7976878881454468, 0.8660968542098999, 0.8257142901420593, 0.868571400642395, 0.8063127398490906, 0.8510028719902039, 0.8500000238418579, 0.8461538553237915, 0.8400576114654541, 0.8783977031707764, 0.8642857074737549, 0.8714285492897034, 0.8057143092155457, 0.852011501789093, 0.8540772795677185, 0.8530670404434204, 0.8655221462249756, 0.8644793033599854, 0.8741058707237244, 0.6863309144973755, 0.855920135974884, 0.8616262674331665, 0.8669527769088745, 0.811965823173523, 0.70071941614151, 0.8642857074737549, 0.8698140382766724, 0.8744650483131409, 0.8477011322975159, 0.8524355292320251, 0.8105413317680359, 0.8538681864738464, 0.8388017416000366, 0.881428599357605, 0.8477951884269714, 0.7803468108177185, 0.7157142758369446, 0.8483548164367676, 0.8687589168548584, 0.8773181438446045, 0.8701854348182678, 0.8630527853965759, 0.8507890701293945, 0.8575498461723328, 0.8640915751457214, 0.8430812954902649, 0.8628571629524231, 0.8783977031707764, 0.854285717010498, 0.8473609089851379, 0.866287350654602, 0.8634423613548279, 0.7892441749572754, 0.8589743375778198, 0.8644793033599854, 0.8742856979370117, 0.8826895356178284, 0.8742856979370117, 0.8683834075927734, 0.874822199344635, 0.8669527769088745, 0.8644793033599854, 0.8473609089851379, 0.8587731719017029, 0.8773181438446045, 0.8762446641921997, 0.8815976977348328, 0.8504273295402527, 0.730659008026123, 0.8595988750457764, 0.851640522480011, 0.8390313386917114, 0.8206599950790405, 0.8571428656578064]
	Accuracy: [0.4879089593887329, 0.529160737991333, 0.5618776679039001, 0.6955903172492981, 0.7012802362442017, 0.7524893283843994, 0.7411095499992371, 0.779516339302063, 0.7638691067695618, 0.8065434098243713, 0.8179231882095337, 0.7908961772918701, 0.8364153504371643, 0.8150782585144043, 0.829302966594696, 0.8563300371170044, 0.8477951884269714, 0.8207681179046631, 0.8563300371170044, 0.6486486196517944, 0.8477951884269714, 0.8477951884269714, 0.8321479558944702, 0.8477951884269714, 0.8605974316596985, 0.788051187992096, 0.8435277342796326, 0.8648648858070374, 0.829302966594696, 0.8364153504371643, 0.8862019777297974, 0.8648648858070374, 0.8577525019645691, 0.8805121183395386, 0.7937411069869995, 0.8620198965072632, 0.8705547451972961, 0.8520625829696655, 0.8349928855895996, 0.8577525019645691, 0.8805121183395386, 0.8933143615722656, 0.8563300371170044, 0.8790895938873291, 0.7866287231445312, 0.8520625829696655, 0.8620198965072632, 0.8776671290397644, 0.8819345831871033, 0.8776671290397644, 0.8890469670295715, 0.8762446641921997, 0.8563300371170044, 0.8862019777297974, 0.8819345831871033, 0.8549075126647949, 0.8108108043670654, 0.8733997344970703, 0.8421052694320679, 0.8805121183395386, 0.8847795128822327, 0.8492176532745361, 0.866287350654602, 0.6671408414840698, 0.8477951884269714, 0.8307254910469055, 0.8406828045845032, 0.8534850478172302, 0.8392603397369385, 0.8349928855895996, 0.8563300371170044, 0.7823613286018372, 0.8349928855895996, 0.8477951884269714, 0.8506401181221008, 0.8605974316596985, 0.7894737124443054, 0.866287350654602, 0.8236131072044373, 0.8677098155021667, 0.8036984205245972, 0.846372663974762, 0.8477951884269714, 0.8449501991271973, 0.8335704207420349, 0.874822199344635, 0.8620198965072632, 0.8691322803497314, 0.8051208853721619, 0.8492176532745361, 0.8520625829696655, 0.8534850478172302, 0.8648648858070374, 0.8634423613548279, 0.8691322803497314, 0.6856330037117004, 0.8563300371170044, 0.8605974316596985, 0.8648648858070374, 0.8122332692146301, 0.6970127820968628, 0.8620198965072632, 0.8648648858070374, 0.8733997344970703, 0.846372663974762, 0.8492176532745361, 0.8093883395195007, 0.8534850478172302, 0.837837815284729, 0.8805121183395386, 0.8477951884269714, 0.7724039554595947, 0.712660014629364, 0.8477951884269714, 0.8691322803497314, 0.874822199344635, 0.8691322803497314, 0.8634423613548279, 0.8492176532745361, 0.8563300371170044, 0.8634423613548279, 0.8406828045845032, 0.8605974316596985, 0.874822199344635, 0.8520625829696655, 0.8449501991271973, 0.866287350654602, 0.8634423613548279, 0.7837837934494019, 0.8591749668121338, 0.8634423613548279, 0.8733997344970703, 0.8819345831871033, 0.8705547451972961, 0.866287350654602, 0.874822199344635, 0.8648648858070374, 0.8634423613548279, 0.8449501991271973, 0.8577525019645691, 0.874822199344635, 0.8762446641921997, 0.8790895938873291, 0.8492176532745361, 0.7297297120094299, 0.8577525019645691, 0.8492176532745361, 0.8392603397369385, 0.8179231882095337, 0.8534850478172302]
In [ ]:
# LARSON CODE

from sklearn import metrics as mt
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline

def compare_mlp_cnn(cnn, mlp, X_test, y_test, labels = 'auto'):
    plt.figure(figsize = (15, 5))
    if cnn is not None:
        yhat_cnn = np.argmax(cnn.predict(X_test), axis = 1)
        acc_cnn = mt.accuracy_score(y_test, yhat_cnn)
        plt.subplot(1, 2, 1)
        cm = mt.confusion_matrix(y_test, yhat_cnn)
        cm = cm / np.sum(cm, axis = 1)[:, np.newaxis]
        sns.heatmap(cm, annot = True, fmt = '.2f', xticklabels = labels, yticklabels = labels)
        plt.title(f'CNN: {acc_cnn:.4f}')
    
    if mlp is not None:
        yhat_mlp = np.argmax(mlp.predict(X_test), axis = 1)
        acc_mlp = mt.accuracy_score(y_test, yhat_mlp)
        plt.subplot(1, 2, 2)
        cm = mt.confusion_matrix(y_test, yhat_mlp)
        cm = cm / np.sum(cm, axis = 1)[:, np.newaxis]
        sns.heatmap(cm, annot = True, fmt = '.2f', xticklabels = labels, yticklabels = labels)
        plt.title(f'MLP: {acc_mlp:.4f}')
In [ ]:
# Compare Best Convolution Against MLP
compare_mlp_cnn(Conv2B, mlp, X_test, y_test)
22/22 [==============================] - 0s 4ms/step
22/22 [==============================] - 0s 3ms/step
No description has been provided for this image
In [ ]:
# Predict the probabilities for each class (CNN and MLP)
y_pred_cnn = Conv2B.predict(X_test)
y_pred_mlp = mlp.predict(X_test)

# Plot ROC and calculate AUC (CNN)
plot_roc_curve(y_test, y_pred_cnn, NUM_CLASSES, 'CNN')

# Plot ROC and calculate AUC (MLP)
plot_roc_curve(y_test, y_pred_mlp, NUM_CLASSES, 'MLP')
22/22 [==============================] - 0s 4ms/step
22/22 [==============================] - 0s 3ms/step
No description has been provided for this image
No description has been provided for this image
  • F2 Score: 0.8542
  • Recall: 0.8535
  • Precision: 0.8571
  • Accuracy: 0.8535

The provided confusion matrix and performance metrics for the Multi-Layer Perceptron (MLP) from Scikit-learn show a model that performs well but does not reach the high levels of accuracy, precision, recall, and F2 score seen in our best CNN.

CNNs are inherently designed to recognize spatial hierarchies in data. By using convolutional layers, CNNs automatically and adaptively learn spatial hierarchies of features from input images. This is especially beneficial for image data like MRIs where tumor patterns are spatially localized and can vary in size and shape. The MLP lacks this level of sophisticated feature extraction since it treats input data as flat vectors, which can lead to a loss of spatial context that is crucial for accurate image classification.

CNNs are also more robust to variations in the image due to their architecture. They can handle variations in the position and orientation of objects in an image, which is essential for medical images where tumors can appear in different locations and scales.

The CNN's F2 score of 0.9872 is significantly higher than the MLP's 0.8542. This indicates that the CNN is more effective at minimizing false negatives, an important consideration in medical diagnostics where missing a positive case can have serious consequences. The CNN also has a higher recall, meaning it correctly identifies more true positives than the MLP, which is crucial for tumor detection tasks. The CNN's accuracy is higher, showing that overall, it classifies the images correctly more often than the MLP.

The superiority of the CNN's architecture for image-related tasks is clear. Its ability to process images in a way that retains the spatial and structural information makes it better suited for medical imaging challenges. The CNN's robustness, efficiency in handling high-dimensional data, and superior feature extraction capabilities directly translate to its higher performance metrics compared to the MLP.

4: Exceptional Credit - Transfer Learning with VGG-19¶

For exceptional credit, we implemented a deep learning model to classify brain tumor MRI scans using transfer learning with the VGG-19 network in PyTorch.

First, we import the necessary libraries and modules for the task.

In [ ]:
# Import Statements
from sklearn.metrics import (
    fbeta_score, 
    precision_score, 
    recall_score, 
    confusion_matrix,
    roc_auc_score, 
    roc_curve,
)
from sklearn.preprocessing import label_binarize
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch import Tensor
import torchvision.models as models
from torch.utils.data import (
    DataLoader, 
    TensorDataset, 
    Dataset,
)

First, we'll load the VGG-19 model, which is pre-trained on ImageNet, with batch normalization. The pre-training helps in leveraging learned features from a vast and diverse dataset. Leveraging a pre-trained model accelerates the learning proces and often leads to better performance when the available dataset is not extremely large.

Then, we modify the model for the current task. We replace the last layer of the classifier with a new linear layer to output 4 classes that correspond to our tumor categories. Modifying the model to suit the specific requirements of our tasks demonstrates an effective approach to applying deep learning in medical imaging.

Then, we transfer the model to a GPU is available for efficient computation.

In [ ]:
# Modify VGG19 Model
vgg19 = models.vgg19_bn(pretrained = True)
vgg19.classifier[6] = nn.Linear(vgg19.classifier[6].in_features, 4)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 
vgg19.to(device)
Out[ ]:
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace=True)
    (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (7): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (9): ReLU(inplace=True)
    (10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (12): ReLU(inplace=True)
    (13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (14): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (16): ReLU(inplace=True)
    (17): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (19): ReLU(inplace=True)
    (20): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (21): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (22): ReLU(inplace=True)
    (23): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (24): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (25): ReLU(inplace=True)
    (26): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (27): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (28): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (29): ReLU(inplace=True)
    (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (31): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (32): ReLU(inplace=True)
    (33): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (34): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (35): ReLU(inplace=True)
    (36): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (37): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (38): ReLU(inplace=True)
    (39): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (40): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (42): ReLU(inplace=True)
    (43): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (44): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (45): ReLU(inplace=True)
    (46): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (47): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (48): ReLU(inplace=True)
    (49): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (50): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (51): ReLU(inplace=True)
    (52): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=4, bias=True)
  )
)

After modifying the model, we prepare the datasets for PyTorch. We convert the MRI images (stored as numpy arrays) to PyTorch tensors. Since VGG-19 expects 3-channel RGB input, the grayscale MRI images are repeated across three channels.

Then we create datasets and data loaders. The image tensors and their labels are wrapped in 'TensorDataset', and 'DataLoader' is used to efficiently iterate over the dataset in batches during training and testing.

In [ ]:
# 2. Prepare datasets for PyTorch
# Convert the loaded MRI images (numpy arrays) to PyTorch tensors
# Convert grayscale images to RGB format by repeating the single channel three times
train_tensors: Tensor = (
    torch.tensor(train_paths)
    .float()
    .view(-1, 1, IMG_SIZE, IMG_SIZE)
    .repeat(1, 3, 1, 1)  # GRAYSCALE IMAGE TO RGB FORMAT FOR torch.models 
)
test_tensors: Tensor = (
    torch.tensor(test_paths)
    .float()
    .view(-1, 1, IMG_SIZE, IMG_SIZE)
    .repeat(1, 3, 1, 1)  # GRAYSCALE IMAGE TO RGB FORMAT FOR torch.models
)

# Convert the image labels into tensor format for PyTorch
train_labels: Tensor = torch.tensor(train_labels).long()
test_labels: Tensor = torch.tensor(test_labels).long()

# Create PyTorch datasets using the image tensors and their corresponding labels
train_dataset: Dataset = TensorDataset(train_tensors, train_labels)
test_dataset: Dataset = TensorDataset(test_tensors, test_labels)

# Define data loaders to efficiently load image batches during training/testing
BATCH_SIZE: int = 64
train_loader: DataLoader = (
    DataLoader(
        train_dataset, 
        batch_size=BATCH_SIZE, 
        shuffle=True,  # Shuffling helps in better model generalization
    )
)
test_loader: DataLoader = (
    DataLoader(
        test_dataset, 
        batch_size=BATCH_SIZE, 
        shuffle=False,  # No need to shuffle test data
    )
)

Finally, we get to train the model. We use cross-entropy loss because it is suitable for multiclass classification tasks. We also chose to use Adam optimizer for optimization.

The model is trained over a specified number of epochs, where in each epoch, the model's parameters are updated to minimize the loss.

The F2 score, which emphasizes recall, is computed for each epoch to reflect the model's performance in correctly identifying tumor classes. We then store the training loss and F2 scores for each epoch to monitor and analyze the model's learning process.

In [ ]:
# Train The Model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(vgg19.parameters(), lr=0.001)
train_losses, f2_scores = [], []

NUM_EPOCHS = 10
for epoch in range(NUM_EPOCHS):
    vgg19.train()  # Set the model to training mode
    train_loss = 0.0  # Initialize training loss
    all_labels, all_predictions = [], []

    # For each batch of images and labels
    for images, labels in train_loader:
        images = images.to(device)  # Send data to the device (GPU/CPU)
        labels = labels.to(device)
        
        optimizer.zero_grad()  # Zero out any previous gradients
        outputs = vgg19(images)  # Forward pass: Get model predictions
        _, predicted = torch.max(outputs.data, 1)  # Get the class with highest predicted probability
        loss = criterion(outputs, labels)  # Compute the loss for this batch
        loss.backward()  # Backward pass: compute the gradient of the loss w.r.t. model parameters
        optimizer.step()  # Update the model's weights
        
        train_loss += loss.item()  # Accumulate the training loss

        # Storing labels & predictions for computing F2 score and confusion matrix
        all_labels.extend(labels.cpu().numpy())
        all_predictions.extend(predicted.cpu().numpy())

    # Compute the epoch's F2 score
    epoch_f2_score = fbeta_score(all_labels, all_predictions, average='macro', beta=2)
    epoch_loss = train_loss / len(train_loader)
    train_losses.append(epoch_loss)
    f2_scores.append(epoch_f2_score)

    # Print training loss and F2 score for this epoch        
    print(f"Epoch {epoch + 1}/{NUM_EPOCHS} - Training loss: {epoch_loss}, F2 Score: {epoch_f2_score}")
Epoch 1/10 - Training loss: 0.7113999189602004, F2 Score: 0.765055633742096
Epoch 2/10 - Training loss: 0.4275264526406924, F2 Score: 0.8488475010758443
Epoch 3/10 - Training loss: 0.3381458747718069, F2 Score: 0.879236050768266
Epoch 4/10 - Training loss: 0.30659198777543173, F2 Score: 0.8891253965114588
Epoch 5/10 - Training loss: 0.24672695696353913, F2 Score: 0.9108673835264186
Epoch 6/10 - Training loss: 0.28771249391138554, F2 Score: 0.9021895381454483
Epoch 7/10 - Training loss: 0.16328175332811143, F2 Score: 0.943971367981084
Epoch 8/10 - Training loss: 0.14742334050436814, F2 Score: 0.9487558418826917
Epoch 9/10 - Training loss: 0.1337707839699255, F2 Score: 0.9527968389489881
Epoch 10/10 - Training loss: 0.1328246280964878, F2 Score: 0.959669909710619

The output provides insight into the training process and performance of the adapted VGG-19 model over 10 epochs.

We see a decreasing training loss, starting from 0.7114 and going down to 0.1328 in the last epoch, indicating that the model is effectively learning from the training data. The decreasing loss means that the model's predictions are getting closer to the actual labels, improving its accuracy.

We also see an increasing F2 score, starting at 0.7650 and goes up to 0.9597, suggesting that the model is becoming better at correctly identifying the different classes, particularly focusing on reducing false negatives. Reducing false negatives is crucial in medical diagnosis tasks. A high F2 score is desirable in this case because it indicates that the model is reliable in detecting tumors.

The consistent improvement over epochs in both training loss and F2 score suggests that the model is learning effectively and generalizes well to the training data, which is a positive sign that the model will likely perform well on unseen data, assuming that the test data has a similar distribution to the training data.

Additionally, the F2 score reaching close to 0.96 in the final epoch is a strong indicator that the model is robust, especially in terms of its ability to minimize false negatives. In this medical context, the model's robustness is key to ensuring that the model can be trusted for medical diagnosis purposes, where missing a tumor can have serious implications.

In [ ]:
# Set Model To Eval Mode
vgg19.eval()

# Initialize Lists To Store Labels, Predictions
all_labels = []
all_prob_predictions = []

# We Won't Update Model
with torch.no_grad():
    for images, labels in test_loader:

        # Move Images, Labels To Device
        images = images.to(device)
        labels = labels.to(device)

        # Get Model Outputs
        outputs = vgg19(images)

        # Get Probabilities Using Softmax
        probabilities = torch.softmax(outputs, dim = 1)

        # Store True Labels, Probabilities
        all_labels.extend(labels.cpu().numpy())
        all_prob_predictions.extend(probabilities.cpu().numpy())

# Convert Lists -> NumPy Arrays
all_labels_array = np.array(all_labels)
all_prob_predictions_array = np.array(all_prob_predictions)

# Binarize Labels For Multi-Class ROC AUC
all_labels_binarized = label_binarize(all_labels_array, classes = range(4))

# Calculate AUC
auc_score = roc_auc_score(all_labels_binarized, all_prob_predictions_array, multi_class = 'ovr')

# Additional Metrics
all_predictions = np.argmax(all_prob_predictions_array, axis = 1)
precision = precision_score(all_labels_array, all_predictions, average = 'macro')
recall = recall_score(all_labels_array, all_predictions, average = 'macro')
f2_score = fbeta_score(all_labels_array, all_predictions, average = 'macro', beta = 2)
conf_matrix = confusion_matrix(all_labels_array, all_predictions)

# Print Metrics
print(f"Precision: {precision}, Recall: {recall}, F2 Score: {f2_score}, AUC: {auc_score}")
print(f"Confusion Matrix:\n{conf_matrix}")

# Plotting Loss Curve, F2 Score For Each Epoch
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(1, NUM_EPOCHS + 1), train_losses, label = 'Training Loss')
plt.title('Loss Curve')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(range(1, NUM_EPOCHS + 1), f2_scores, label = 'F2 Score')
plt.title('F2 Score per Epoch')
plt.xlabel('Epochs')
plt.ylabel('F2 Score')
plt.legend()
plt.show()

# Plot ROC Curve
fpr, tpr, _ = roc_curve(all_labels_binarized.ravel(), all_prob_predictions_array.ravel())
plt.figure()
plt.plot(fpr, tpr, color = 'darkorange', lw = 2, label = 'ROC curve (area = %0.2f)' % auc_score)
plt.plot([0, 1], [0, 1], color = 'navy', lw = 2, linestyle = '--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc = "lower right")
plt.show()
Precision: 0.8496126693519013, Recall: 0.7673202614379084, F2 Score: 0.7664428535216011, AUC: 0.9561580262712889
Confusion Matrix:
[[246  10  44   0]
 [ 31 215  59   1]
 [  0   0 405   0]
 [ 19  10 107 164]]
No description has been provided for this image
No description has been provided for this image

The output provides comprehensive performance metrics for the VGG-19 model on the test dataset for the brain tumor MRI scan classification task.

Precision (0.8496): High precision indicates that, among the instances the model predicted as positive, a large proportion was correct, meaning the model has a low rate of false positives.

Recall (0.7673): This metric is slightly lower than precision, suggesting that the model missed a notable number of positive cases (false negatives).

F2 Score (0.7664): This score, which gives more weight to recall, is similar to the recall value, indicating that the model's ability to correctly identify positive cases (especially important in medical diagnostics) is reasonable but could be improved.

AUC (0.9562): A high AUC value suggests that the model is quite capable of distinguishing between the different classes (e.g., types of tumors and no tumor).

The confusion matrix presents the model's predictions against the actual labels.

Most Confident in 'No Tumor' Class: The model has high accuracy for 'No Tumor' class (405 out of 405 correct).

Struggles with Meningioma and Pituitary: There are significant misclassifications between Meningioma (1) and Pituitary (3), indicating potential difficulty in distinguishing these classes.

Moderate Performance on Glioma: Glioma (0) is mostly classified correctly but has some confusion with Meningioma (1) and 'No Tumor' (2).

The plots for training loss and F2 Score over epochs provide a visual representation of the model's learning process:

The decreasing loss curve indicates that the model is increasingly fitting the training data well. The increasing F2 score curve shows the model’s growing ability to correctly identify positive cases, with more emphasis on reducing false negatives.

The ROC curve and the AUC value (0.9562) provide a graphical representation of the model’s diagnostic ability. A curve closer to the top-left corner indicates a better performance, so the outputted plot is a relatively good curve.

Overall, the model demonstrates strong capabilities in classifying brain tumor MRI scans, particularly doing well in identifying the 'No Tumor' class. However, there is room for improvement when differentiating between tumor types, particularly between Meningioma and Pituitary tumors.

The high AUC score indicates good model discrimination ability between the classes. The moderate recall and F2 score suggest a need for more tuning to improve the model's sensitivity to reduce the number of missed tumor cases.